2026-03-10T13:37:30.144 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T13:37:30.149 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T13:37:30.167 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1053
branch: squid
description: orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_monitoring_stack_basic}
email: null
first_in_suite: false
flavor: default
job_id: '1053'
ktype: distro
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 1
      mgr:
        debug mgr: 20
        debug ms: 1
        mgr/cephadm/use_agent: false
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - MON_DOWN
    - mons down
    - mon down
    - out of quorum
    - CEPHADM_STRAY_DAEMON
    - CEPHADM_FAILED_DAEMON
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - mon.a
  - mgr.a
  - osd.0
- - host.b
  - mon.b
  - mgr.b
  - osd.1
- - host.c
  - mon.c
  - osd.2
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm00.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBACx3QdVGJoZw9ykbUguGBx6Y9rFpgLERPcSoIfAh5v4HrMsiLXlDsML+I3hazP1aB1bSlLD5uEqovB5R0Kbl68=
  vm07.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE1u+zt29uyXufdnTkIm1oFwpIRJVCb7+7UMvqImPOHxoC56JdkExUUsdtaVONH6IoUsZ0goggPhZAo1qXZ4Lj4=
  vm08.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHBPesMpKNxPiLso2+LqoCPXfB1UQxnMvE1lTud48ml54yMz/79p/l+32vGXPT6rdUCThS0aMQv4GNWiFI5YyG8=
tasks:
- install: null
- cephadm: null
- cephadm.shell:
    host.a:
    - |
      set -e
      set -x
      ceph orch apply node-exporter
      ceph orch apply grafana
      ceph orch apply alertmanager
      ceph orch apply prometheus
      sleep 240
      ceph orch ls
      ceph orch ps
      ceph orch host ls
      MON_DAEMON=$(ceph orch ps --daemon-type mon -f json | jq -r 'last | .daemon_name')
      GRAFANA_HOST=$(ceph orch ps --daemon-type grafana -f json | jq -e '.[]' | jq -r '.hostname')
      PROM_HOST=$(ceph orch ps --daemon-type prometheus -f json | jq -e '.[]' | jq -r '.hostname')
      ALERTM_HOST=$(ceph orch ps --daemon-type alertmanager -f json | jq -e '.[]' | jq -r '.hostname')
      GRAFANA_IP=$(ceph orch host ls -f json | jq -r --arg GRAFANA_HOST "$GRAFANA_HOST" '.[] | select(.hostname==$GRAFANA_HOST) | .addr')
      PROM_IP=$(ceph orch host ls -f json | jq -r --arg PROM_HOST "$PROM_HOST" '.[] | select(.hostname==$PROM_HOST) | .addr')
      ALERTM_IP=$(ceph orch host ls -f json | jq -r --arg ALERTM_HOST "$ALERTM_HOST" '.[] | select(.hostname==$ALERTM_HOST) | .addr')
      # check each host node-exporter metrics endpoint is responsive
      ALL_HOST_IPS=$(ceph orch host ls -f json | jq -r '.[] | .addr')
      for ip in $ALL_HOST_IPS; do
        curl -s http://${ip}:9100/metrics
      done
      # check grafana endpoints are responsive and database health is okay
      curl -k -s https://${GRAFANA_IP}:3000/api/health
      curl -k -s https://${GRAFANA_IP}:3000/api/health | jq -e '.database == "ok"'
      # stop mon daemon in order to trigger an alert
      ceph orch daemon stop $MON_DAEMON
      sleep 120
      # check prometheus endpoints are responsive and mon down alert is firing
      curl -s http://${PROM_IP}:9095/api/v1/status/config
      curl -s http://${PROM_IP}:9095/api/v1/status/config | jq -e '.status == "success"'
      curl -s http://${PROM_IP}:9095/api/v1/alerts
      curl -s http://${PROM_IP}:9095/api/v1/alerts | jq -e '.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"'
      # check alertmanager endpoints are responsive and mon down alert is active
      curl -s http://${ALERTM_IP}:9093/api/v2/status
      curl -s http://${ALERTM_IP}:9093/api/v2/alerts
      curl -s http://${ALERTM_IP}:9093/api/v2/alerts | jq -e '.[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"'
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-10T13:37:30.167 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T13:37:30.168 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T13:37:30.168 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T13:37:30.168 INFO:teuthology.task.internal:Checking packages...
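The pass/fail logic of the cephadm.shell task above rides on `jq -e`, which sets its exit status from the last value it outputs; under `set -e`, a missing or non-firing alert therefore aborts the job. A minimal offline sketch of the CephMonDown check, run against a hand-written sample of the Prometheus `/api/v1/alerts` payload (the sample JSON is an assumption for illustration, not taken from this run):

```shell
#!/bin/sh
set -e
# Hand-written sample of a Prometheus /api/v1/alerts response (assumed shape).
cat > /tmp/alerts.json <<'EOF'
{"status": "success",
 "data": {"alerts": [
   {"labels": {"alertname": "CephMonDown", "severity": "warning"},
    "state": "firing"}]}}
EOF
# Same filter as the task: prints "true" and exits 0 only when a
# CephMonDown alert exists and its state is "firing"; otherwise jq -e
# exits non-zero and set -e aborts the script.
jq -e '.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"' /tmp/alerts.json
```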
2026-03-10T13:37:30.168 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T13:37:30.168 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T13:37:30.168 INFO:teuthology.packaging:ref: None
2026-03-10T13:37:30.168 INFO:teuthology.packaging:tag: None
2026-03-10T13:37:30.168 INFO:teuthology.packaging:branch: squid
2026-03-10T13:37:30.168 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T13:37:30.168 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-10T13:37:30.914 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-10T13:37:30.915 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T13:37:30.916 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T13:37:30.916 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-10T13:37:30.916 INFO:teuthology.task.internal:Saving configuration
2026-03-10T13:37:30.921 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T13:37:30.922 INFO:teuthology.task.internal.check_lock:Checking locks...
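The shaman lookup logged above is a plain GET whose only non-trivial part is URL-encoding the `distros` field; the same query string can be rebuilt offline with jq's `@uri` filter (a sketch; the parameter names are simply read off the logged URL):

```shell
#!/bin/sh
# Rebuild the logged shaman.ceph.com search URL, percent-encoding the
# distros value (ubuntu/22.04/x86_64 -> ubuntu%2F22.04%2Fx86_64) via @uri.
jq -rn --arg distros 'ubuntu/22.04/x86_64' \
  '"https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=\($distros|@uri)&ref=squid"'
```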
2026-03-10T13:37:30.927 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm00.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1053', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 13:35:55.372758', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:00', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBACx3QdVGJoZw9ykbUguGBx6Y9rFpgLERPcSoIfAh5v4HrMsiLXlDsML+I3hazP1aB1bSlLD5uEqovB5R0Kbl68='}
2026-03-10T13:37:30.932 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm07.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1053', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 13:35:55.372160', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:07', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE1u+zt29uyXufdnTkIm1oFwpIRJVCb7+7UMvqImPOHxoC56JdkExUUsdtaVONH6IoUsZ0goggPhZAo1qXZ4Lj4='}
2026-03-10T13:37:30.936 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm08.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1053', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 13:35:55.372547', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:08', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHBPesMpKNxPiLso2+LqoCPXfB1UQxnMvE1lTud48ml54yMz/79p/l+32vGXPT6rdUCThS0aMQv4GNWiFI5YyG8='}
2026-03-10T13:37:30.936 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T13:37:30.937 INFO:teuthology.task.internal:roles: ubuntu@vm00.local - ['host.a', 'mon.a', 'mgr.a', 'osd.0']
2026-03-10T13:37:30.937 INFO:teuthology.task.internal:roles: ubuntu@vm07.local - ['host.b', 'mon.b', 'mgr.b', 'osd.1']
2026-03-10T13:37:30.937 INFO:teuthology.task.internal:roles: ubuntu@vm08.local - ['host.c', 'mon.c', 'osd.2']
2026-03-10T13:37:30.937 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T13:37:30.941 DEBUG:teuthology.task.console_log:vm00 does not support IPMI; excluding
2026-03-10T13:37:30.946 DEBUG:teuthology.task.console_log:vm07 does not support IPMI; excluding
2026-03-10T13:37:30.950 DEBUG:teuthology.task.console_log:vm08 does not support IPMI; excluding
2026-03-10T13:37:30.951 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f42f13cfbe0>, signals=[15])
2026-03-10T13:37:30.951 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T13:37:30.951 INFO:teuthology.task.internal:Opening connections...
2026-03-10T13:37:30.951 DEBUG:teuthology.task.internal:connecting to ubuntu@vm00.local
2026-03-10T13:37:30.952 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T13:37:31.008 DEBUG:teuthology.task.internal:connecting to ubuntu@vm07.local
2026-03-10T13:37:31.009 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm07.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T13:37:31.066 DEBUG:teuthology.task.internal:connecting to ubuntu@vm08.local
2026-03-10T13:37:31.067 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm08.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T13:37:31.125 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T13:37:31.127 DEBUG:teuthology.orchestra.run.vm00:> uname -m
2026-03-10T13:37:31.130 INFO:teuthology.orchestra.run.vm00.stdout:x86_64
2026-03-10T13:37:31.130 DEBUG:teuthology.orchestra.run.vm00:> cat /etc/os-release
2026-03-10T13:37:31.176 INFO:teuthology.orchestra.run.vm00.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T13:37:31.177 INFO:teuthology.orchestra.run.vm00.stdout:NAME="Ubuntu"
2026-03-10T13:37:31.177 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_ID="22.04"
2026-03-10T13:37:31.177 INFO:teuthology.orchestra.run.vm00.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T13:37:31.177 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_CODENAME=jammy
2026-03-10T13:37:31.177 INFO:teuthology.orchestra.run.vm00.stdout:ID=ubuntu
2026-03-10T13:37:31.177 INFO:teuthology.orchestra.run.vm00.stdout:ID_LIKE=debian
2026-03-10T13:37:31.177 INFO:teuthology.orchestra.run.vm00.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T13:37:31.177 INFO:teuthology.orchestra.run.vm00.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T13:37:31.177 INFO:teuthology.orchestra.run.vm00.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T13:37:31.177 INFO:teuthology.orchestra.run.vm00.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T13:37:31.177 INFO:teuthology.orchestra.run.vm00.stdout:UBUNTU_CODENAME=jammy
2026-03-10T13:37:31.177 INFO:teuthology.lock.ops:Updating vm00.local on lock server
2026-03-10T13:37:31.181 DEBUG:teuthology.orchestra.run.vm07:> uname -m
2026-03-10T13:37:31.184 INFO:teuthology.orchestra.run.vm07.stdout:x86_64
2026-03-10T13:37:31.185 DEBUG:teuthology.orchestra.run.vm07:> cat /etc/os-release
2026-03-10T13:37:31.230 INFO:teuthology.orchestra.run.vm07.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T13:37:31.230 INFO:teuthology.orchestra.run.vm07.stdout:NAME="Ubuntu"
2026-03-10T13:37:31.230 INFO:teuthology.orchestra.run.vm07.stdout:VERSION_ID="22.04"
2026-03-10T13:37:31.230 INFO:teuthology.orchestra.run.vm07.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T13:37:31.230 INFO:teuthology.orchestra.run.vm07.stdout:VERSION_CODENAME=jammy
2026-03-10T13:37:31.230 INFO:teuthology.orchestra.run.vm07.stdout:ID=ubuntu
2026-03-10T13:37:31.230 INFO:teuthology.orchestra.run.vm07.stdout:ID_LIKE=debian
2026-03-10T13:37:31.230 INFO:teuthology.orchestra.run.vm07.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T13:37:31.230 INFO:teuthology.orchestra.run.vm07.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T13:37:31.231 INFO:teuthology.orchestra.run.vm07.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T13:37:31.231 INFO:teuthology.orchestra.run.vm07.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T13:37:31.231 INFO:teuthology.orchestra.run.vm07.stdout:UBUNTU_CODENAME=jammy
2026-03-10T13:37:31.231 INFO:teuthology.lock.ops:Updating vm07.local on lock server
2026-03-10T13:37:31.235 DEBUG:teuthology.orchestra.run.vm08:> uname -m
2026-03-10T13:37:31.238 INFO:teuthology.orchestra.run.vm08.stdout:x86_64
2026-03-10T13:37:31.238 DEBUG:teuthology.orchestra.run.vm08:> cat /etc/os-release
2026-03-10T13:37:31.282 INFO:teuthology.orchestra.run.vm08.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T13:37:31.282 INFO:teuthology.orchestra.run.vm08.stdout:NAME="Ubuntu"
2026-03-10T13:37:31.282 INFO:teuthology.orchestra.run.vm08.stdout:VERSION_ID="22.04"
2026-03-10T13:37:31.282 INFO:teuthology.orchestra.run.vm08.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T13:37:31.283 INFO:teuthology.orchestra.run.vm08.stdout:VERSION_CODENAME=jammy
2026-03-10T13:37:31.283 INFO:teuthology.orchestra.run.vm08.stdout:ID=ubuntu
2026-03-10T13:37:31.283 INFO:teuthology.orchestra.run.vm08.stdout:ID_LIKE=debian
2026-03-10T13:37:31.283 INFO:teuthology.orchestra.run.vm08.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T13:37:31.283 INFO:teuthology.orchestra.run.vm08.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T13:37:31.283 INFO:teuthology.orchestra.run.vm08.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T13:37:31.283 INFO:teuthology.orchestra.run.vm08.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T13:37:31.283 INFO:teuthology.orchestra.run.vm08.stdout:UBUNTU_CODENAME=jammy
2026-03-10T13:37:31.283 INFO:teuthology.lock.ops:Updating vm08.local on lock server
2026-03-10T13:37:31.287 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T13:37:31.289 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T13:37:31.290 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T13:37:31.290 DEBUG:teuthology.orchestra.run.vm00:> test '!' -e /home/ubuntu/cephtest
2026-03-10T13:37:31.291 DEBUG:teuthology.orchestra.run.vm07:> test '!' -e /home/ubuntu/cephtest
2026-03-10T13:37:31.292 DEBUG:teuthology.orchestra.run.vm08:> test '!' -e /home/ubuntu/cephtest
2026-03-10T13:37:31.326 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T13:37:31.327 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T13:37:31.327 DEBUG:teuthology.orchestra.run.vm00:> test -z $(ls -A /var/lib/ceph)
2026-03-10T13:37:31.335 DEBUG:teuthology.orchestra.run.vm07:> test -z $(ls -A /var/lib/ceph)
2026-03-10T13:37:31.336 DEBUG:teuthology.orchestra.run.vm08:> test -z $(ls -A /var/lib/ceph)
2026-03-10T13:37:31.337 INFO:teuthology.orchestra.run.vm00.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T13:37:31.338 INFO:teuthology.orchestra.run.vm07.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T13:37:31.370 INFO:teuthology.orchestra.run.vm08.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T13:37:31.370 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T13:37:31.378 DEBUG:teuthology.orchestra.run.vm00:> test -e /ceph-qa-ready
2026-03-10T13:37:31.380 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T13:37:31.739 DEBUG:teuthology.orchestra.run.vm07:> test -e /ceph-qa-ready
2026-03-10T13:37:31.741 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T13:37:31.972 DEBUG:teuthology.orchestra.run.vm08:> test -e /ceph-qa-ready
2026-03-10T13:37:31.975 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T13:37:32.206 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T13:37:32.210 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T13:37:32.210 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T13:37:32.211 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T13:37:32.212 DEBUG:teuthology.orchestra.run.vm08:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T13:37:32.214 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T13:37:32.216 INFO:teuthology.run_tasks:Running task internal.archive...
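The `test -z $(ls -A /var/lib/ceph)` probe above succeeds only when the directory is empty, and also when it is missing, as here, since a failed `ls` produces empty stdout. Note the command substitution is unquoted in the logged command; a small sketch of the quoted form, which behaves the same for the empty case and is safer when a listing contains whitespace:

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
# Empty directory: ls -A prints nothing, so test -z succeeds.
test -z "$(ls -A "$dir")" && echo empty
touch "$dir/osd.0"
# Non-empty: ls -A now prints entries, so test -z fails.
test -z "$(ls -A "$dir")" || echo not-empty
rm -rf "$dir"
```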
2026-03-10T13:37:32.220 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T13:37:32.220 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T13:37:32.254 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T13:37:32.256 DEBUG:teuthology.orchestra.run.vm08:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T13:37:32.260 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T13:37:32.261 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-10T13:37:32.261 DEBUG:teuthology.orchestra.run.vm00:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T13:37:32.300 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T13:37:32.300 DEBUG:teuthology.orchestra.run.vm07:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T13:37:32.302 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T13:37:32.302 DEBUG:teuthology.orchestra.run.vm08:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T13:37:32.304 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T13:37:32.305 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T13:37:32.342 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T13:37:32.344 DEBUG:teuthology.orchestra.run.vm08:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T13:37:32.349 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T13:37:32.352 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T13:37:32.354 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T13:37:32.355 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T13:37:32.357 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T13:37:32.360 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T13:37:32.361 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T13:37:32.367 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T13:37:32.367 DEBUG:teuthology.orchestra.run.vm00:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T13:37:32.398 DEBUG:teuthology.orchestra.run.vm07:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T13:37:32.400 DEBUG:teuthology.orchestra.run.vm08:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T13:37:32.410 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T13:37:32.412 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
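The internal.sudo sed pair above flips `requiretty` to `!requiretty` and `!visiblepw` to `visiblepw` on non-comment lines, leaving commented lines alone. The same expressions run against a few sample sudoers lines (sample input is assumed for illustration; the real task edits /etc/sudoers in place with a .orig.teuthology backup):

```shell
#!/bin/sh
# Sample sudoers fragment: one requiretty line, one negated visiblepw line,
# and a commented line that must stay untouched.
printf 'Defaults requiretty\nDefaults !visiblepw\n# Defaults requiretty\n' > /tmp/sudoers.sample
# Same expressions as the log: negate requiretty, un-negate visiblepw.
# ^\([^#]*\) never matches past a '#', so comments are skipped.
sed -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' \
    -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /tmp/sudoers.sample
```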
2026-03-10T13:37:32.412 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T13:37:32.450 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T13:37:32.452 DEBUG:teuthology.orchestra.run.vm08:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T13:37:32.455 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T13:37:32.496 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T13:37:32.540 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T13:37:32.540 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T13:37:32.589 DEBUG:teuthology.orchestra.run.vm07:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T13:37:32.593 DEBUG:teuthology.orchestra.run.vm07:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T13:37:32.638 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T13:37:32.638 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T13:37:32.691 DEBUG:teuthology.orchestra.run.vm08:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T13:37:32.694 DEBUG:teuthology.orchestra.run.vm08:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T13:37:32.738 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T13:37:32.738 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T13:37:32.787 DEBUG:teuthology.orchestra.run.vm00:> sudo service rsyslog restart
2026-03-10T13:37:32.788 DEBUG:teuthology.orchestra.run.vm07:> sudo service rsyslog restart
2026-03-10T13:37:32.789 DEBUG:teuthology.orchestra.run.vm08:> sudo service rsyslog restart
2026-03-10T13:37:32.846 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T13:37:32.848 INFO:teuthology.task.internal:Starting timer...
2026-03-10T13:37:32.848 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T13:37:32.850 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T13:37:32.853 INFO:teuthology.task.selinux:Excluding vm00: VMs are not yet supported
2026-03-10T13:37:32.853 INFO:teuthology.task.selinux:Excluding vm07: VMs are not yet supported
2026-03-10T13:37:32.853 INFO:teuthology.task.selinux:Excluding vm08: VMs are not yet supported
2026-03-10T13:37:32.853 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T13:37:32.853 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T13:37:32.853 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T13:37:32.853 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-10T13:37:32.854 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T13:37:32.854 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T13:37:32.856 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T13:37:33.347 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T13:37:33.352 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T13:37:33.352 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventoryw9uda_nm --limit vm00.local,vm07.local,vm08.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T13:40:05.926 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm00.local'), Remote(name='ubuntu@vm07.local'), Remote(name='ubuntu@vm08.local')]
2026-03-10T13:40:05.926 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm00.local'
2026-03-10T13:40:05.927 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T13:40:05.988 DEBUG:teuthology.orchestra.run.vm00:> true
2026-03-10T13:40:06.225 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm00.local'
2026-03-10T13:40:06.225 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm07.local'
2026-03-10T13:40:06.225 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm07.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T13:40:06.289 DEBUG:teuthology.orchestra.run.vm07:> true
2026-03-10T13:40:06.509 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm07.local'
2026-03-10T13:40:06.509 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm08.local'
2026-03-10T13:40:06.509 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm08.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T13:40:06.577 DEBUG:teuthology.orchestra.run.vm08:> true
2026-03-10T13:40:06.800 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm08.local'
2026-03-10T13:40:06.801 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T13:40:06.803 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-10T13:40:06.803 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T13:40:06.803 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T13:40:06.805 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T13:40:06.805 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T13:40:06.806 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T13:40:06.806 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T13:40:06.819 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-10T13:40:06.819 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: Command line: ntpd -gq
2026-03-10T13:40:06.819 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: ----------------------------------------------------
2026-03-10T13:40:06.819 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: ntp-4 is maintained by Network Time Foundation,
2026-03-10T13:40:06.819 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-10T13:40:06.819 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: corporation. Support and training for ntp-4 are
2026-03-10T13:40:06.819 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: available at https://www.nwtime.org/support
2026-03-10T13:40:06.819 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: ----------------------------------------------------
2026-03-10T13:40:06.820 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: proto: precision = 0.040 usec (-24)
2026-03-10T13:40:06.820 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: basedate set to 2022-02-04
2026-03-10T13:40:06.820 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: gps base set to 2022-02-06 (week 2196)
2026-03-10T13:40:06.820 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-10T13:40:06.820 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-10T13:40:06.821 INFO:teuthology.orchestra.run.vm00.stderr:10 Mar 13:40:06 ntpd[16119]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago
2026-03-10T13:40:06.821 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: Listen and drop on 0 v6wildcard [::]:123
2026-03-10T13:40:06.821 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-10T13:40:06.822 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-10T13:40:06.822 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: Command line: ntpd -gq
2026-03-10T13:40:06.822 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: ----------------------------------------------------
2026-03-10T13:40:06.822 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: ntp-4 is maintained by Network Time Foundation,
2026-03-10T13:40:06.822 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-10T13:40:06.822 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: Listen normally on 2 lo 127.0.0.1:123
2026-03-10T13:40:06.822 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: Listen normally on 3 ens3 192.168.123.100:123
2026-03-10T13:40:06.822 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: Listen normally on 4 lo [::1]:123
2026-03-10T13:40:06.822 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: corporation. Support and training for ntp-4 are
2026-03-10T13:40:06.822 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: available at https://www.nwtime.org/support
2026-03-10T13:40:06.822 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: ----------------------------------------------------
2026-03-10T13:40:06.823 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:0%2]:123
2026-03-10T13:40:06.823 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:06 ntpd[16119]: Listening on routing socket on fd #22 for interface updates
2026-03-10T13:40:06.823 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: proto: precision = 0.029 usec (-25)
2026-03-10T13:40:06.823 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: basedate set to 2022-02-04
2026-03-10T13:40:06.823 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: gps base set to 2022-02-06 (week 2196)
2026-03-10T13:40:06.823 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-10T13:40:06.823 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-10T13:40:06.823 INFO:teuthology.orchestra.run.vm07.stderr:10 Mar 13:40:06 ntpd[16106]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago
2026-03-10T13:40:06.824 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: Listen and drop on 0 v6wildcard [::]:123
2026-03-10T13:40:06.824 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-10T13:40:06.824 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: Listen normally on 2 lo 127.0.0.1:123
2026-03-10T13:40:06.824 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: Listen normally on 3 ens3 192.168.123.107:123 2026-03-10T13:40:06.824 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: Listen normally on 4 lo [::1]:123 2026-03-10T13:40:06.824 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:7%2]:123 2026-03-10T13:40:06.824 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:06 ntpd[16106]: Listening on routing socket on fd #22 for interface updates 2026-03-10T13:40:06.861 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting 2026-03-10T13:40:06.861 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: Command line: ntpd -gq 2026-03-10T13:40:06.861 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: ---------------------------------------------------- 2026-03-10T13:40:06.861 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: ntp-4 is maintained by Network Time Foundation, 2026-03-10T13:40:06.861 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: Inc. (NTF), a non-profit 501(c)(3) public-benefit 2026-03-10T13:40:06.861 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: corporation. 
Support and training for ntp-4 are 2026-03-10T13:40:06.861 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: available at https://www.nwtime.org/support 2026-03-10T13:40:06.861 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: ---------------------------------------------------- 2026-03-10T13:40:06.861 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: proto: precision = 0.029 usec (-25) 2026-03-10T13:40:06.861 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: basedate set to 2022-02-04 2026-03-10T13:40:06.861 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: gps base set to 2022-02-06 (week 2196) 2026-03-10T13:40:06.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature 2026-03-10T13:40:06.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37 2026-03-10T13:40:06.862 INFO:teuthology.orchestra.run.vm08.stderr:10 Mar 13:40:06 ntpd[16109]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago 2026-03-10T13:40:06.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: Listen and drop on 0 v6wildcard [::]:123 2026-03-10T13:40:06.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: Listen and drop on 1 v4wildcard 0.0.0.0:123 2026-03-10T13:40:06.863 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: Listen normally on 2 lo 127.0.0.1:123 2026-03-10T13:40:06.863 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: Listen normally on 3 ens3 192.168.123.108:123 2026-03-10T13:40:06.863 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: Listen normally on 4 lo [::1]:123 2026-03-10T13:40:06.863 
INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:8%2]:123 2026-03-10T13:40:06.863 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:06 ntpd[16109]: Listening on routing socket on fd #22 for interface updates 2026-03-10T13:40:07.821 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:07 ntpd[16119]: Soliciting pool server 152.53.184.199 2026-03-10T13:40:07.823 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:07 ntpd[16106]: Soliciting pool server 152.53.184.199 2026-03-10T13:40:07.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:07 ntpd[16109]: Soliciting pool server 116.203.96.227 2026-03-10T13:40:08.820 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:08 ntpd[16119]: Soliciting pool server 162.159.200.123 2026-03-10T13:40:08.821 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:08 ntpd[16119]: Soliciting pool server 91.98.156.7 2026-03-10T13:40:08.822 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:08 ntpd[16106]: Soliciting pool server 162.159.200.123 2026-03-10T13:40:08.823 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:08 ntpd[16106]: Soliciting pool server 91.98.156.7 2026-03-10T13:40:08.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:08 ntpd[16109]: Soliciting pool server 152.53.184.199 2026-03-10T13:40:08.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:08 ntpd[16109]: Soliciting pool server 185.248.189.10 2026-03-10T13:40:09.821 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:09 ntpd[16119]: Soliciting pool server 116.203.218.109 2026-03-10T13:40:09.821 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:09 ntpd[16119]: Soliciting pool server 195.201.20.16 2026-03-10T13:40:09.821 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:09 ntpd[16119]: Soliciting pool server 158.180.28.150 2026-03-10T13:40:09.822 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:09 ntpd[16106]: Soliciting pool server 
116.203.218.109 2026-03-10T13:40:09.823 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:09 ntpd[16106]: Soliciting pool server 195.201.20.16 2026-03-10T13:40:09.823 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:09 ntpd[16106]: Soliciting pool server 158.180.28.150 2026-03-10T13:40:09.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:09 ntpd[16109]: Soliciting pool server 91.98.156.7 2026-03-10T13:40:09.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:09 ntpd[16109]: Soliciting pool server 162.159.200.123 2026-03-10T13:40:09.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:09 ntpd[16109]: Soliciting pool server 134.60.111.110 2026-03-10T13:40:10.820 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:10 ntpd[16119]: Soliciting pool server 94.130.23.46 2026-03-10T13:40:10.820 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:10 ntpd[16119]: Soliciting pool server 82.165.178.31 2026-03-10T13:40:10.821 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:10 ntpd[16119]: Soliciting pool server 116.203.96.227 2026-03-10T13:40:10.821 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:10 ntpd[16119]: Soliciting pool server 88.99.76.254 2026-03-10T13:40:10.822 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:10 ntpd[16106]: Soliciting pool server 94.130.23.46 2026-03-10T13:40:10.822 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:10 ntpd[16106]: Soliciting pool server 82.165.178.31 2026-03-10T13:40:10.823 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:10 ntpd[16106]: Soliciting pool server 116.203.96.227 2026-03-10T13:40:10.823 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:10 ntpd[16106]: Soliciting pool server 88.99.76.254 2026-03-10T13:40:10.861 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:10 ntpd[16109]: Soliciting pool server 158.180.28.150 2026-03-10T13:40:10.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:10 ntpd[16109]: Soliciting pool server 116.203.218.109 
2026-03-10T13:40:10.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:10 ntpd[16109]: Soliciting pool server 195.201.20.16 2026-03-10T13:40:10.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:10 ntpd[16109]: Soliciting pool server 144.91.126.59 2026-03-10T13:40:11.820 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:11 ntpd[16119]: Soliciting pool server 195.201.125.53 2026-03-10T13:40:11.820 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:11 ntpd[16119]: Soliciting pool server 134.60.1.30 2026-03-10T13:40:11.821 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:11 ntpd[16119]: Soliciting pool server 185.248.189.10 2026-03-10T13:40:11.821 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:11 ntpd[16119]: Soliciting pool server 91.189.91.157 2026-03-10T13:40:11.822 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:11 ntpd[16106]: Soliciting pool server 195.201.125.53 2026-03-10T13:40:11.823 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:11 ntpd[16106]: Soliciting pool server 134.60.1.30 2026-03-10T13:40:11.823 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:11 ntpd[16106]: Soliciting pool server 185.248.189.10 2026-03-10T13:40:11.823 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:11 ntpd[16106]: Soliciting pool server 91.189.91.157 2026-03-10T13:40:11.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:11 ntpd[16109]: Soliciting pool server 88.99.76.254 2026-03-10T13:40:11.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:11 ntpd[16109]: Soliciting pool server 94.130.23.46 2026-03-10T13:40:11.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:11 ntpd[16109]: Soliciting pool server 185.125.190.56 2026-03-10T13:40:12.820 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:12 ntpd[16119]: Soliciting pool server 185.125.190.57 2026-03-10T13:40:12.820 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:12 ntpd[16119]: Soliciting pool server 178.254.28.54 
2026-03-10T13:40:12.821 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:12 ntpd[16119]: Soliciting pool server 134.60.111.110 2026-03-10T13:40:12.822 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:12 ntpd[16106]: Soliciting pool server 185.125.190.57 2026-03-10T13:40:12.822 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:12 ntpd[16106]: Soliciting pool server 178.254.28.54 2026-03-10T13:40:12.823 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:12 ntpd[16106]: Soliciting pool server 134.60.111.110 2026-03-10T13:40:12.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:12 ntpd[16109]: Soliciting pool server 91.189.91.157 2026-03-10T13:40:12.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:12 ntpd[16109]: Soliciting pool server 195.201.125.53 2026-03-10T13:40:12.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:12 ntpd[16109]: Soliciting pool server 134.60.1.30 2026-03-10T13:40:13.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:13 ntpd[16109]: Soliciting pool server 185.125.190.57 2026-03-10T13:40:13.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:13 ntpd[16109]: Soliciting pool server 178.254.28.54 2026-03-10T13:40:13.862 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:13 ntpd[16109]: Soliciting pool server 2003:a:843:7c00::1 2026-03-10T13:40:14.853 INFO:teuthology.orchestra.run.vm00.stdout:10 Mar 13:40:14 ntpd[16119]: ntpd: time slew +0.002695 s 2026-03-10T13:40:14.853 INFO:teuthology.orchestra.run.vm00.stdout:ntpd: time slew +0.002695s 2026-03-10T13:40:14.855 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 13:40:14 ntpd[16106]: ntpd: time slew -0.002383 s 2026-03-10T13:40:14.855 INFO:teuthology.orchestra.run.vm07.stdout:ntpd: time slew -0.002383s 2026-03-10T13:40:14.874 INFO:teuthology.orchestra.run.vm00.stdout: remote refid st t when poll reach delay offset jitter 2026-03-10T13:40:14.874 
INFO:teuthology.orchestra.run.vm00.stdout:============================================================================== 2026-03-10T13:40:14.874 INFO:teuthology.orchestra.run.vm00.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T13:40:14.874 INFO:teuthology.orchestra.run.vm00.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T13:40:14.874 INFO:teuthology.orchestra.run.vm00.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T13:40:14.874 INFO:teuthology.orchestra.run.vm00.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T13:40:14.874 INFO:teuthology.orchestra.run.vm00.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T13:40:14.878 INFO:teuthology.orchestra.run.vm07.stdout: remote refid st t when poll reach delay offset jitter 2026-03-10T13:40:14.878 INFO:teuthology.orchestra.run.vm07.stdout:============================================================================== 2026-03-10T13:40:14.878 INFO:teuthology.orchestra.run.vm07.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T13:40:14.878 INFO:teuthology.orchestra.run.vm07.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T13:40:14.878 INFO:teuthology.orchestra.run.vm07.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T13:40:14.878 INFO:teuthology.orchestra.run.vm07.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T13:40:14.878 INFO:teuthology.orchestra.run.vm07.stdout: ntp.ubuntu.com .POOL. 
16 p - 64 0 0.000 +0.000 0.000 2026-03-10T13:40:14.885 INFO:teuthology.orchestra.run.vm08.stdout:10 Mar 13:40:14 ntpd[16109]: ntpd: time slew +0.001092 s 2026-03-10T13:40:14.885 INFO:teuthology.orchestra.run.vm08.stdout:ntpd: time slew +0.001092s 2026-03-10T13:40:14.904 INFO:teuthology.orchestra.run.vm08.stdout: remote refid st t when poll reach delay offset jitter 2026-03-10T13:40:14.904 INFO:teuthology.orchestra.run.vm08.stdout:============================================================================== 2026-03-10T13:40:14.904 INFO:teuthology.orchestra.run.vm08.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T13:40:14.904 INFO:teuthology.orchestra.run.vm08.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T13:40:14.904 INFO:teuthology.orchestra.run.vm08.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T13:40:14.904 INFO:teuthology.orchestra.run.vm08.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T13:40:14.904 INFO:teuthology.orchestra.run.vm08.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-10T13:40:14.904 INFO:teuthology.run_tasks:Running task install... 
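The `ntpq -p` peer tables above are what the clock task leaves behind for skew checks. A minimal sketch of parsing that table, assuming the standard ten-column `ntpq -p` layout; `parse_ntpq_peers` is a hypothetical helper for illustration, not a teuthology function:

```python
def parse_ntpq_peers(text):
    """Return {remote: offset_ms} for each peer row of `ntpq -p` output."""
    peers = {}
    for line in text.splitlines():
        fields = line.split()
        # Peer rows have 10 columns; skip the header row and the '=' rule.
        if len(fields) != 10 or fields[0].startswith(("remote", "=")):
            continue
        remote, offset = fields[0], fields[8]
        # Strip the tally character (*, +, -, #, x, .) from the remote name.
        peers[remote.lstrip("*+-#x.")] = float(offset)
    return peers

# Sample mirroring the vm00 output above (pool entries, not yet reachable).
sample = """\
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
 ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
"""

peers = parse_ntpq_peers(sample)
assert all(abs(off) < 50.0 for off in peers.values())  # e.g. max 50 ms skew
```

Pool placeholder rows (`.POOL.`, reach 0) report zero offset, so a real skew check would wait for resolved peers before trusting the numbers.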
2026-03-10T13:40:14.906 DEBUG:teuthology.task.install:project ceph
2026-03-10T13:40:14.906 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T13:40:14.906 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-10T13:40:14.906 INFO:teuthology.task.install:Using flavor: default
2026-03-10T13:40:14.909 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-10T13:40:14.909 INFO:teuthology.task.install:extra packages: []
2026-03-10T13:40:14.909 DEBUG:teuthology.orchestra.run.vm00:> sudo apt-key list | grep Ceph
2026-03-10T13:40:14.909 DEBUG:teuthology.orchestra.run.vm07:> sudo apt-key list | grep Ceph
2026-03-10T13:40:14.909 DEBUG:teuthology.orchestra.run.vm08:> sudo apt-key list | grep Ceph
2026-03-10T13:40:14.953 INFO:teuthology.orchestra.run.vm00.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-10T13:40:14.961 INFO:teuthology.orchestra.run.vm07.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-10T13:40:14.971 INFO:teuthology.orchestra.run.vm00.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-10T13:40:14.971 INFO:teuthology.orchestra.run.vm00.stdout:uid [ unknown] Ceph.com (release key)
2026-03-10T13:40:14.971 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-10T13:40:14.971 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-10T13:40:14.971 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T13:40:15.033 INFO:teuthology.orchestra.run.vm08.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
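The `Querying https://shaman.ceph.com/api/search?...` lines show how teuthology locates a build repo: a shaman search keyed on project, flavor, distro, and sha1. A sketch of composing that URL with the parameters visible in the log; this is an illustration of the query shape, not teuthology.packaging's actual code:

```python
from urllib.parse import urlencode

def shaman_search_url(project, sha1, flavor="default",
                      distro="ubuntu/22.04/x86_64"):
    """Build a shaman search URL like the one logged above."""
    params = {
        "status": "ready",  # only builds whose repos are ready to serve
        "project": project,
        "flavor": flavor,
        "distros": distro,  # slashes get percent-encoded by urlencode
        "sha1": sha1,
    }
    return "https://shaman.ceph.com/api/search?" + urlencode(params)

url = shaman_search_url("ceph", "e911bdebe5c8faa3800735d1568fcdca65db60df")
assert "distros=ubuntu%2F22.04%2Fx86_64" in url
```

A matching build record points at the chacra repo (`https://1.chacra.ceph.com/r/...`) that the subsequent `Pulling from` lines fetch packages from.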
2026-03-10T13:40:15.033 INFO:teuthology.orchestra.run.vm07.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-10T13:40:15.033 INFO:teuthology.orchestra.run.vm07.stdout:uid [ unknown] Ceph.com (release key)
2026-03-10T13:40:15.033 INFO:teuthology.orchestra.run.vm08.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-10T13:40:15.034 INFO:teuthology.orchestra.run.vm08.stdout:uid [ unknown] Ceph.com (release key)
2026-03-10T13:40:15.034 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-10T13:40:15.034 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-10T13:40:15.034 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T13:40:15.034 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-10T13:40:15.034 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-10T13:40:15.034 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T13:40:15.614 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-10T13:40:15.614 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-10T13:40:15.676 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-10T13:40:15.677 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-10T13:40:15.737 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-10T13:40:15.737 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-10T13:40:16.150 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T13:40:16.151 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-10T13:40:16.159 DEBUG:teuthology.orchestra.run.vm00:> sudo apt-get update
2026-03-10T13:40:16.207 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T13:40:16.207 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-10T13:40:16.215 DEBUG:teuthology.orchestra.run.vm07:> sudo apt-get update
2026-03-10T13:40:16.267 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T13:40:16.267 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-10T13:40:16.276 DEBUG:teuthology.orchestra.run.vm08:> sudo apt-get update
2026-03-10T13:40:16.350 INFO:teuthology.orchestra.run.vm00.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-10T13:40:16.353 INFO:teuthology.orchestra.run.vm00.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-10T13:40:16.362 INFO:teuthology.orchestra.run.vm00.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-10T13:40:16.405 INFO:teuthology.orchestra.run.vm07.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-10T13:40:16.410 INFO:teuthology.orchestra.run.vm07.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-10T13:40:16.419 INFO:teuthology.orchestra.run.vm07.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-10T13:40:16.577 INFO:teuthology.orchestra.run.vm08.stdout:Get:1 https://security.ubuntu.com/ubuntu jammy-security InRelease [129 kB]
2026-03-10T13:40:16.586 INFO:teuthology.orchestra.run.vm08.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-10T13:40:16.622 INFO:teuthology.orchestra.run.vm08.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease [128 kB]
2026-03-10T13:40:16.710 INFO:teuthology.orchestra.run.vm00.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-10T13:40:16.766 INFO:teuthology.orchestra.run.vm08.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease [127 kB]
2026-03-10T13:40:16.773 INFO:teuthology.orchestra.run.vm07.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-10T13:40:16.830 INFO:teuthology.orchestra.run.vm08.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [3285 kB]
2026-03-10T13:40:16.866 INFO:teuthology.orchestra.run.vm07.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-10T13:40:16.871 INFO:teuthology.orchestra.run.vm00.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-10T13:40:16.886 INFO:teuthology.orchestra.run.vm08.stdout:Ign:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-10T13:40:16.976 INFO:teuthology.orchestra.run.vm07.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-10T13:40:16.984 INFO:teuthology.orchestra.run.vm00.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-10T13:40:17.007 INFO:teuthology.orchestra.run.vm08.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [1256 kB]
2026-03-10T13:40:17.007 INFO:teuthology.orchestra.run.vm08.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-10T13:40:17.087 INFO:teuthology.orchestra.run.vm07.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-10T13:40:17.096 INFO:teuthology.orchestra.run.vm00.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-10T13:40:17.123 INFO:teuthology.orchestra.run.vm08.stdout:Ign:9 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-10T13:40:17.198 INFO:teuthology.orchestra.run.vm07.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-10T13:40:17.209 INFO:teuthology.orchestra.run.vm00.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-10T13:40:17.240 INFO:teuthology.orchestra.run.vm08.stdout:Get:10 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-10T13:40:17.278 INFO:teuthology.orchestra.run.vm07.stdout:Fetched 25.8 kB in 1s (28.5 kB/s)
2026-03-10T13:40:17.286 INFO:teuthology.orchestra.run.vm00.stdout:Fetched 25.8 kB in 1s (26.7 kB/s)
2026-03-10T13:40:17.374 INFO:teuthology.orchestra.run.vm08.stdout:Fetched 4951 kB in 1s (5344 kB/s)
2026-03-10T13:40:17.985 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T13:40:18.000 DEBUG:teuthology.orchestra.run.vm07:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-10T13:40:18.036 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T13:40:18.052 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T13:40:18.066 DEBUG:teuthology.orchestra.run.vm00:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-10T13:40:18.106 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T13:40:18.120 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists...
2026-03-10T13:40:18.133 DEBUG:teuthology.orchestra.run.vm08:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-10T13:40:18.168 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists...
2026-03-10T13:40:18.260 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T13:40:18.261 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T13:40:18.306 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T13:40:18.306 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T13:40:18.405 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree...
2026-03-10T13:40:18.405 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information...
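Each host runs the same version-pinned `apt-get install`, pinning every project package to the chacra build version so all three nodes get identical binaries. A sketch of assembling such a command line from the deb package list and version; illustrative only, not the code inside teuthology.task.install.deb (note `--force-yes`, as logged, is deprecated in newer apt in favor of the `--allow-*` options):

```python
def pinned_install_cmd(packages, version):
    """Compose a noninteractive, version-pinned apt-get install command."""
    pinned = [f"{pkg}={version}" for pkg in packages]
    return (
        "sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes "
        '-o Dpkg::Options::="--force-confdef" '
        '-o Dpkg::Options::="--force-confold" install '
        + " ".join(pinned)  # e.g. ceph=19.2.3-678-ge911bdeb-1jammy ...
    )

cmd = pinned_install_cmd(["ceph", "cephadm", "librbd1"],
                         "19.2.3-678-ge911bdeb-1jammy")
assert "ceph=19.2.3-678-ge911bdeb-1jammy" in cmd
```

Pinning `pkg=version` makes apt fail loudly if the repo written to `/etc/apt/sources.list.d/ceph.list` does not actually serve that exact build, rather than silently installing a different one.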
2026-03-10T13:40:18.500 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:40:18.500 INFO:teuthology.orchestra.run.vm00.stdout:  kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T13:40:18.501 INFO:teuthology.orchestra.run.vm00.stdout:  libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-10T13:40:18.501 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:40:18.502 INFO:teuthology.orchestra.run.vm00.stdout:The following additional packages will be installed:
2026-03-10T13:40:18.502 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-10T13:40:18.502 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-10T13:40:18.502 INFO:teuthology.orchestra.run.vm00.stdout:  libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T13:40:18.503 INFO:teuthology.orchestra.run.vm00.stdout:  liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-10T13:40:18.503 INFO:teuthology.orchestra.run.vm00.stdout:  libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-10T13:40:18.504 INFO:teuthology.orchestra.run.vm00.stdout:  libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T13:40:18.504 INFO:teuthology.orchestra.run.vm00.stdout:  pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:40:18.504 INFO:teuthology.orchestra.run.vm00.stdout:  python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:40:18.504 INFO:teuthology.orchestra.run.vm00.stdout:  python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-10T13:40:18.504 INFO:teuthology.orchestra.run.vm00.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T13:40:18.504 INFO:teuthology.orchestra.run.vm00.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T13:40:18.504 INFO:teuthology.orchestra.run.vm00.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T13:40:18.504 INFO:teuthology.orchestra.run.vm00.stdout:  python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-10T13:40:18.504 INFO:teuthology.orchestra.run.vm00.stdout:  python3-prettytable python3-psutil python3-py python3-pygments
2026-03-10T13:40:18.504 INFO:teuthology.orchestra.run.vm00.stdout:  python3-pyinotify python3-pytest python3-repoze.lru
2026-03-10T13:40:18.504 INFO:teuthology.orchestra.run.vm00.stdout:  python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T13:40:18.504 INFO:teuthology.orchestra.run.vm00.stdout:  python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T13:40:18.504 INFO:teuthology.orchestra.run.vm00.stdout:  python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T13:40:18.504 INFO:teuthology.orchestra.run.vm00.stdout:  python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-10T13:40:18.504 INFO:teuthology.orchestra.run.vm00.stdout:  python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-10T13:40:18.504 INFO:teuthology.orchestra.run.vm00.stdout:  qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-10T13:40:18.505 INFO:teuthology.orchestra.run.vm00.stdout:Suggested packages:
2026-03-10T13:40:18.505 INFO:teuthology.orchestra.run.vm00.stdout:  python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-10T13:40:18.505 INFO:teuthology.orchestra.run.vm00.stdout:  python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-10T13:40:18.505 INFO:teuthology.orchestra.run.vm00.stdout:  libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-10T13:40:18.505 INFO:teuthology.orchestra.run.vm00.stdout:  python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-10T13:40:18.506 INFO:teuthology.orchestra.run.vm00.stdout:  python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-10T13:40:18.506 INFO:teuthology.orchestra.run.vm00.stdout:  python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-10T13:40:18.506 INFO:teuthology.orchestra.run.vm00.stdout:  smart-notifier mailx | mailutils
2026-03-10T13:40:18.506 INFO:teuthology.orchestra.run.vm00.stdout:Recommended packages:
2026-03-10T13:40:18.506 INFO:teuthology.orchestra.run.vm00.stdout:  btrfs-tools
2026-03-10T13:40:18.512 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:40:18.512 INFO:teuthology.orchestra.run.vm07.stdout:  kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T13:40:18.513 INFO:teuthology.orchestra.run.vm07.stdout:  libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-10T13:40:18.513 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:40:18.514 INFO:teuthology.orchestra.run.vm07.stdout:The following additional packages will be installed:
2026-03-10T13:40:18.514 INFO:teuthology.orchestra.run.vm07.stdout:  ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-10T13:40:18.514 INFO:teuthology.orchestra.run.vm07.stdout:  ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-10T13:40:18.514 INFO:teuthology.orchestra.run.vm07.stdout:  libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T13:40:18.514 INFO:teuthology.orchestra.run.vm07.stdout:  liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-10T13:40:18.514 INFO:teuthology.orchestra.run.vm07.stdout:  libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-10T13:40:18.515 INFO:teuthology.orchestra.run.vm07.stdout:  libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T13:40:18.515 INFO:teuthology.orchestra.run.vm07.stdout:  pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:40:18.515 INFO:teuthology.orchestra.run.vm07.stdout:  python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:40:18.515 INFO:teuthology.orchestra.run.vm07.stdout:  python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-10T13:40:18.515 INFO:teuthology.orchestra.run.vm07.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T13:40:18.515 INFO:teuthology.orchestra.run.vm07.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T13:40:18.515 INFO:teuthology.orchestra.run.vm07.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T13:40:18.515 INFO:teuthology.orchestra.run.vm07.stdout:  python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-10T13:40:18.515 INFO:teuthology.orchestra.run.vm07.stdout:  python3-prettytable python3-psutil python3-py python3-pygments
2026-03-10T13:40:18.515 INFO:teuthology.orchestra.run.vm07.stdout:  python3-pyinotify python3-pytest python3-repoze.lru
2026-03-10T13:40:18.515 INFO:teuthology.orchestra.run.vm07.stdout:  python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T13:40:18.515 INFO:teuthology.orchestra.run.vm07.stdout:  python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T13:40:18.515 INFO:teuthology.orchestra.run.vm07.stdout:  python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T13:40:18.515 INFO:teuthology.orchestra.run.vm07.stdout:  python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-10T13:40:18.515 INFO:teuthology.orchestra.run.vm07.stdout:  python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-10T13:40:18.515 INFO:teuthology.orchestra.run.vm07.stdout:  qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-10T13:40:18.516 INFO:teuthology.orchestra.run.vm07.stdout:Suggested packages:
2026-03-10T13:40:18.516 INFO:teuthology.orchestra.run.vm07.stdout:  python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-10T13:40:18.516 INFO:teuthology.orchestra.run.vm07.stdout:  python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-10T13:40:18.516 INFO:teuthology.orchestra.run.vm07.stdout:  libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-10T13:40:18.516 INFO:teuthology.orchestra.run.vm07.stdout:  python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-10T13:40:18.516 INFO:teuthology.orchestra.run.vm07.stdout:  python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-10T13:40:18.516 INFO:teuthology.orchestra.run.vm07.stdout:  python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-10T13:40:18.516 INFO:teuthology.orchestra.run.vm07.stdout:  smart-notifier mailx | mailutils
2026-03-10T13:40:18.516 INFO:teuthology.orchestra.run.vm07.stdout:Recommended packages:
2026-03-10T13:40:18.516 INFO:teuthology.orchestra.run.vm07.stdout:  btrfs-tools
2026-03-10T13:40:18.551 INFO:teuthology.orchestra.run.vm00.stdout:The following NEW packages will be installed:
2026-03-10T13:40:18.551 INFO:teuthology.orchestra.run.vm00.stdout:  ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-10T13:40:18.551 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-10T13:40:18.552 INFO:teuthology.orchestra.run.vm00.stdout:  ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-10T13:40:18.552 INFO:teuthology.orchestra.run.vm00.stdout:  libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-10T13:40:18.552 INFO:teuthology.orchestra.run.vm00.stdout:  liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-10T13:40:18.552 INFO:teuthology.orchestra.run.vm00.stdout:  libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-10T13:40:18.552 INFO:teuthology.orchestra.run.vm00.stdout:  librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-10T13:40:18.553 INFO:teuthology.orchestra.run.vm00.stdout:  lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-10T13:40:18.553 INFO:teuthology.orchestra.run.vm00.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T13:40:18.553 INFO:teuthology.orchestra.run.vm00.stdout:  python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-10T13:40:18.553 INFO:teuthology.orchestra.run.vm00.stdout:  python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-10T13:40:18.553 INFO:teuthology.orchestra.run.vm00.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T13:40:18.553 INFO:teuthology.orchestra.run.vm00.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T13:40:18.553 INFO:teuthology.orchestra.run.vm00.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T13:40:18.553 INFO:teuthology.orchestra.run.vm00.stdout:  python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-10T13:40:18.553 INFO:teuthology.orchestra.run.vm00.stdout:  python3-prettytable python3-psutil python3-py python3-pygments
2026-03-10T13:40:18.553 INFO:teuthology.orchestra.run.vm00.stdout:  python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-10T13:40:18.553 INFO:teuthology.orchestra.run.vm00.stdout:  python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-10T13:40:18.553 INFO:teuthology.orchestra.run.vm00.stdout:  python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T13:40:18.553 INFO:teuthology.orchestra.run.vm00.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T13:40:18.553 INFO:teuthology.orchestra.run.vm00.stdout:  python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-10T13:40:18.553 INFO:teuthology.orchestra.run.vm00.stdout:  python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:40:18.553 INFO:teuthology.orchestra.run.vm00.stdout:  python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-10T13:40:18.553 INFO:teuthology.orchestra.run.vm00.stdout:  socat unzip xmlstarlet zip
2026-03-10T13:40:18.554 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be upgraded:
2026-03-10T13:40:18.555 INFO:teuthology.orchestra.run.vm00.stdout:  librados2 librbd1
2026-03-10T13:40:18.559 INFO:teuthology.orchestra.run.vm07.stdout:The following NEW packages will be installed:
2026-03-10T13:40:18.559 INFO:teuthology.orchestra.run.vm07.stdout:  ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-10T13:40:18.559 INFO:teuthology.orchestra.run.vm07.stdout:  ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-10T13:40:18.559 INFO:teuthology.orchestra.run.vm07.stdout:  ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-10T13:40:18.559 INFO:teuthology.orchestra.run.vm07.stdout:  libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-10T13:40:18.559 INFO:teuthology.orchestra.run.vm07.stdout:  liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-10T13:40:18.559 INFO:teuthology.orchestra.run.vm07.stdout:  libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-10T13:40:18.559 INFO:teuthology.orchestra.run.vm07.stdout:  librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-10T13:40:18.559 INFO:teuthology.orchestra.run.vm07.stdout:  lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-10T13:40:18.560 INFO:teuthology.orchestra.run.vm07.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T13:40:18.560 INFO:teuthology.orchestra.run.vm07.stdout:  python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-10T13:40:18.560 INFO:teuthology.orchestra.run.vm07.stdout:  python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-10T13:40:18.560 INFO:teuthology.orchestra.run.vm07.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T13:40:18.560 INFO:teuthology.orchestra.run.vm07.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T13:40:18.560 INFO:teuthology.orchestra.run.vm07.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T13:40:18.560 INFO:teuthology.orchestra.run.vm07.stdout:  python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-10T13:40:18.560 INFO:teuthology.orchestra.run.vm07.stdout:  python3-prettytable python3-psutil python3-py python3-pygments
2026-03-10T13:40:18.560 INFO:teuthology.orchestra.run.vm07.stdout:  python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-10T13:40:18.560 INFO:teuthology.orchestra.run.vm07.stdout:  python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-10T13:40:18.560 INFO:teuthology.orchestra.run.vm07.stdout:  python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T13:40:18.560 INFO:teuthology.orchestra.run.vm07.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T13:40:18.560 INFO:teuthology.orchestra.run.vm07.stdout:  python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-10T13:40:18.560 INFO:teuthology.orchestra.run.vm07.stdout:  python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:40:18.560 INFO:teuthology.orchestra.run.vm07.stdout:  python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-10T13:40:18.560 INFO:teuthology.orchestra.run.vm07.stdout:  socat unzip xmlstarlet zip
2026-03-10T13:40:18.560 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be upgraded:
2026-03-10T13:40:18.561 INFO:teuthology.orchestra.run.vm07.stdout:  librados2 librbd1
2026-03-10T13:40:18.659 INFO:teuthology.orchestra.run.vm07.stdout:2 upgraded, 107 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T13:40:18.659 INFO:teuthology.orchestra.run.vm07.stdout:Need to get 178 MB of archives.
2026-03-10T13:40:18.659 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 782 MB of additional disk space will be used.
2026-03-10T13:40:18.659 INFO:teuthology.orchestra.run.vm07.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB]
2026-03-10T13:40:18.682 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:40:18.682 INFO:teuthology.orchestra.run.vm08.stdout:  kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T13:40:18.683 INFO:teuthology.orchestra.run.vm08.stdout:  libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-10T13:40:18.683 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:40:18.684 INFO:teuthology.orchestra.run.vm08.stdout:The following additional packages will be installed:
2026-03-10T13:40:18.684 INFO:teuthology.orchestra.run.vm08.stdout:  ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-10T13:40:18.684 INFO:teuthology.orchestra.run.vm08.stdout:  ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-10T13:40:18.684 INFO:teuthology.orchestra.run.vm08.stdout:  libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T13:40:18.684 INFO:teuthology.orchestra.run.vm08.stdout:  liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-10T13:40:18.685 INFO:teuthology.orchestra.run.vm08.stdout:  libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-10T13:40:18.686 INFO:teuthology.orchestra.run.vm08.stdout:  libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T13:40:18.686 INFO:teuthology.orchestra.run.vm08.stdout:  pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:40:18.686 INFO:teuthology.orchestra.run.vm08.stdout:  python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:40:18.686 INFO:teuthology.orchestra.run.vm08.stdout:  python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-10T13:40:18.686 INFO:teuthology.orchestra.run.vm08.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T13:40:18.686 INFO:teuthology.orchestra.run.vm08.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T13:40:18.686 INFO:teuthology.orchestra.run.vm08.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T13:40:18.686 INFO:teuthology.orchestra.run.vm08.stdout:  python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-10T13:40:18.686 INFO:teuthology.orchestra.run.vm08.stdout:  python3-prettytable python3-psutil python3-py python3-pygments
2026-03-10T13:40:18.686 INFO:teuthology.orchestra.run.vm08.stdout:  python3-pyinotify python3-pytest python3-repoze.lru
2026-03-10T13:40:18.686 INFO:teuthology.orchestra.run.vm08.stdout:  python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T13:40:18.686 INFO:teuthology.orchestra.run.vm08.stdout:  python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T13:40:18.686 INFO:teuthology.orchestra.run.vm08.stdout:  python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T13:40:18.686 INFO:teuthology.orchestra.run.vm08.stdout:  python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-10T13:40:18.686 INFO:teuthology.orchestra.run.vm08.stdout:  python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-10T13:40:18.686 INFO:teuthology.orchestra.run.vm08.stdout:  qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-10T13:40:18.687 INFO:teuthology.orchestra.run.vm08.stdout:Suggested packages:
2026-03-10T13:40:18.687 INFO:teuthology.orchestra.run.vm08.stdout:  python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-10T13:40:18.687 INFO:teuthology.orchestra.run.vm08.stdout:  python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-10T13:40:18.687 INFO:teuthology.orchestra.run.vm08.stdout:  libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-10T13:40:18.687 INFO:teuthology.orchestra.run.vm08.stdout:  python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-10T13:40:18.687 INFO:teuthology.orchestra.run.vm08.stdout:  python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-10T13:40:18.688 INFO:teuthology.orchestra.run.vm08.stdout:  python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-10T13:40:18.688 INFO:teuthology.orchestra.run.vm08.stdout:  smart-notifier mailx | mailutils
2026-03-10T13:40:18.688 INFO:teuthology.orchestra.run.vm08.stdout:Recommended packages:
2026-03-10T13:40:18.688 INFO:teuthology.orchestra.run.vm08.stdout:  btrfs-tools
2026-03-10T13:40:18.694 INFO:teuthology.orchestra.run.vm07.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB]
2026-03-10T13:40:18.698 INFO:teuthology.orchestra.run.vm07.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB]
2026-03-10T13:40:18.701 INFO:teuthology.orchestra.run.vm07.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB]
2026-03-10T13:40:18.728 INFO:teuthology.orchestra.run.vm07.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB]
2026-03-10T13:40:18.729 INFO:teuthology.orchestra.run.vm07.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB]
2026-03-10T13:40:18.732 INFO:teuthology.orchestra.run.vm08.stdout:The following NEW packages will be installed:
2026-03-10T13:40:18.732 INFO:teuthology.orchestra.run.vm08.stdout:  ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-10T13:40:18.732 INFO:teuthology.orchestra.run.vm08.stdout:  ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-10T13:40:18.733 INFO:teuthology.orchestra.run.vm08.stdout:  ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-10T13:40:18.733 INFO:teuthology.orchestra.run.vm08.stdout:  libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-10T13:40:18.733 INFO:teuthology.orchestra.run.vm08.stdout:  liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-10T13:40:18.733 INFO:teuthology.orchestra.run.vm08.stdout:  libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-10T13:40:18.733 INFO:teuthology.orchestra.run.vm08.stdout:  librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-10T13:40:18.733 INFO:teuthology.orchestra.run.vm08.stdout:  lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-10T13:40:18.733 INFO:teuthology.orchestra.run.vm08.stdout:  python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T13:40:18.733 INFO:teuthology.orchestra.run.vm08.stdout:  python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-10T13:40:18.733 INFO:teuthology.orchestra.run.vm08.stdout:  python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-10T13:40:18.733 INFO:teuthology.orchestra.run.vm08.stdout:  python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T13:40:18.733 INFO:teuthology.orchestra.run.vm08.stdout:  python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T13:40:18.733 INFO:teuthology.orchestra.run.vm08.stdout:  python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T13:40:18.733 INFO:teuthology.orchestra.run.vm08.stdout:  python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-10T13:40:18.733 INFO:teuthology.orchestra.run.vm08.stdout:  python3-prettytable python3-psutil python3-py python3-pygments
2026-03-10T13:40:18.734 INFO:teuthology.orchestra.run.vm08.stdout:  python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-10T13:40:18.734 INFO:teuthology.orchestra.run.vm08.stdout:  python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-10T13:40:18.734 INFO:teuthology.orchestra.run.vm08.stdout:  python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T13:40:18.734 INFO:teuthology.orchestra.run.vm08.stdout:  python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T13:40:18.734 INFO:teuthology.orchestra.run.vm08.stdout:  python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-10T13:40:18.734 INFO:teuthology.orchestra.run.vm08.stdout:  python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:40:18.734 INFO:teuthology.orchestra.run.vm08.stdout:  python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-10T13:40:18.734 INFO:teuthology.orchestra.run.vm08.stdout:  socat unzip xmlstarlet zip
2026-03-10T13:40:18.734 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be upgraded:
2026-03-10T13:40:18.734 INFO:teuthology.orchestra.run.vm08.stdout:  librados2 librbd1
2026-03-10T13:40:18.743 INFO:teuthology.orchestra.run.vm07.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB]
2026-03-10T13:40:18.744 INFO:teuthology.orchestra.run.vm07.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB]
2026-03-10T13:40:18.745 INFO:teuthology.orchestra.run.vm07.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB]
2026-03-10T13:40:18.745 INFO:teuthology.orchestra.run.vm07.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB]
2026-03-10T13:40:18.746 INFO:teuthology.orchestra.run.vm07.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB]
2026-03-10T13:40:18.749 INFO:teuthology.orchestra.run.vm07.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB]
2026-03-10T13:40:18.749 INFO:teuthology.orchestra.run.vm07.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB]
2026-03-10T13:40:18.751 INFO:teuthology.orchestra.run.vm07.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB]
2026-03-10T13:40:18.751 INFO:teuthology.orchestra.run.vm07.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B]
2026-03-10T13:40:18.756 INFO:teuthology.orchestra.run.vm07.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB]
2026-03-10T13:40:18.757 INFO:teuthology.orchestra.run.vm07.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB]
2026-03-10T13:40:18.759 INFO:teuthology.orchestra.run.vm07.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB]
2026-03-10T13:40:18.760 INFO:teuthology.orchestra.run.vm07.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB]
2026-03-10T13:40:18.760 INFO:teuthology.orchestra.run.vm07.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B]
2026-03-10T13:40:18.763 INFO:teuthology.orchestra.run.vm07.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB]
2026-03-10T13:40:18.763 INFO:teuthology.orchestra.run.vm07.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B]
2026-03-10T13:40:18.764 INFO:teuthology.orchestra.run.vm07.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B]
2026-03-10T13:40:18.764 INFO:teuthology.orchestra.run.vm07.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB]
2026-03-10T13:40:18.764 INFO:teuthology.orchestra.run.vm07.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB]
2026-03-10T13:40:18.770 INFO:teuthology.orchestra.run.vm07.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B]
2026-03-10T13:40:18.771 INFO:teuthology.orchestra.run.vm07.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B]
2026-03-10T13:40:18.771 INFO:teuthology.orchestra.run.vm07.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB]
2026-03-10T13:40:18.772 INFO:teuthology.orchestra.run.vm07.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB]
2026-03-10T13:40:18.773 INFO:teuthology.orchestra.run.vm07.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB]
2026-03-10T13:40:18.778 INFO:teuthology.orchestra.run.vm07.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB]
2026-03-10T13:40:18.778 INFO:teuthology.orchestra.run.vm07.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB]
2026-03-10T13:40:18.779 INFO:teuthology.orchestra.run.vm07.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B]
2026-03-10T13:40:18.779 INFO:teuthology.orchestra.run.vm07.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB]
2026-03-10T13:40:18.780 INFO:teuthology.orchestra.run.vm07.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB]
2026-03-10T13:40:18.785 INFO:teuthology.orchestra.run.vm07.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB]
2026-03-10T13:40:18.786 INFO:teuthology.orchestra.run.vm07.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB]
2026-03-10T13:40:18.789 INFO:teuthology.orchestra.run.vm07.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B]
2026-03-10T13:40:18.789 INFO:teuthology.orchestra.run.vm07.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB]
2026-03-10T13:40:18.790 INFO:teuthology.orchestra.run.vm07.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB]
2026-03-10T13:40:18.792 INFO:teuthology.orchestra.run.vm07.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB]
2026-03-10T13:40:18.793 INFO:teuthology.orchestra.run.vm07.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB]
2026-03-10T13:40:18.795 INFO:teuthology.orchestra.run.vm07.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB]
2026-03-10T13:40:18.796 INFO:teuthology.orchestra.run.vm07.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB]
2026-03-10T13:40:18.797 INFO:teuthology.orchestra.run.vm07.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB]
2026-03-10T13:40:18.800 INFO:teuthology.orchestra.run.vm07.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB]
2026-03-10T13:40:18.801 INFO:teuthology.orchestra.run.vm07.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB]
2026-03-10T13:40:18.829 INFO:teuthology.orchestra.run.vm07.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB]
2026-03-10T13:40:18.830 INFO:teuthology.orchestra.run.vm07.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB]
2026-03-10T13:40:18.830 INFO:teuthology.orchestra.run.vm07.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB]
2026-03-10T13:40:18.843 INFO:teuthology.orchestra.run.vm07.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B]
2026-03-10T13:40:18.843 INFO:teuthology.orchestra.run.vm07.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB]
2026-03-10T13:40:18.843 INFO:teuthology.orchestra.run.vm07.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB]
2026-03-10T13:40:18.844 INFO:teuthology.orchestra.run.vm07.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB]
2026-03-10T13:40:18.844 INFO:teuthology.orchestra.run.vm07.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB]
2026-03-10T13:40:18.844 INFO:teuthology.orchestra.run.vm07.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB]
2026-03-10T13:40:18.846 INFO:teuthology.orchestra.run.vm07.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB]
2026-03-10T13:40:18.847 INFO:teuthology.orchestra.run.vm07.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB]
2026-03-10T13:40:18.848 INFO:teuthology.orchestra.run.vm07.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB]
2026-03-10T13:40:18.854 INFO:teuthology.orchestra.run.vm07.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB]
2026-03-10T13:40:18.857 INFO:teuthology.orchestra.run.vm07.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB]
2026-03-10T13:40:18.860 INFO:teuthology.orchestra.run.vm07.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB]
2026-03-10T13:40:18.860 INFO:teuthology.orchestra.run.vm07.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB]
2026-03-10T13:40:18.861 INFO:teuthology.orchestra.run.vm07.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB]
2026-03-10T13:40:18.872 INFO:teuthology.orchestra.run.vm07.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB]
2026-03-10T13:40:18.873 INFO:teuthology.orchestra.run.vm07.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB]
2026-03-10T13:40:18.875 INFO:teuthology.orchestra.run.vm07.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B]
2026-03-10T13:40:18.876 INFO:teuthology.orchestra.run.vm07.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB]
2026-03-10T13:40:18.876 INFO:teuthology.orchestra.run.vm07.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0
kB] 2026-03-10T13:40:18.876 INFO:teuthology.orchestra.run.vm07.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-10T13:40:18.877 INFO:teuthology.orchestra.run.vm07.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-10T13:40:18.878 INFO:teuthology.orchestra.run.vm07.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-10T13:40:18.884 INFO:teuthology.orchestra.run.vm07.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-10T13:40:18.884 INFO:teuthology.orchestra.run.vm07.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-10T13:40:18.891 INFO:teuthology.orchestra.run.vm07.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-10T13:40:18.893 INFO:teuthology.orchestra.run.vm07.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-10T13:40:18.893 INFO:teuthology.orchestra.run.vm07.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-10T13:40:18.910 INFO:teuthology.orchestra.run.vm07.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-10T13:40:19.046 INFO:teuthology.orchestra.run.vm00.stdout:2 upgraded, 107 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T13:40:19.046 INFO:teuthology.orchestra.run.vm00.stdout:Need to get 178 MB of archives. 2026-03-10T13:40:19.046 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 782 MB of additional disk space will be used. 
2026-03-10T13:40:19.046 INFO:teuthology.orchestra.run.vm00.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-10T13:40:19.152 INFO:teuthology.orchestra.run.vm00.stdout:Get:2 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-10T13:40:19.163 INFO:teuthology.orchestra.run.vm07.stdout:Get:79 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-10T13:40:19.193 INFO:teuthology.orchestra.run.vm08.stdout:2 upgraded, 107 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T13:40:19.193 INFO:teuthology.orchestra.run.vm08.stdout:Need to get 178 MB of archives. 2026-03-10T13:40:19.193 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 782 MB of additional disk space will be used. 
2026-03-10T13:40:19.193 INFO:teuthology.orchestra.run.vm08.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-10T13:40:19.363 INFO:teuthology.orchestra.run.vm08.stdout:Get:2 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-10T13:40:19.531 INFO:teuthology.orchestra.run.vm00.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-10T13:40:19.547 INFO:teuthology.orchestra.run.vm00.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-10T13:40:19.646 INFO:teuthology.orchestra.run.vm00.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-10T13:40:19.664 INFO:teuthology.orchestra.run.vm08.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-10T13:40:19.679 INFO:teuthology.orchestra.run.vm08.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-10T13:40:19.774 INFO:teuthology.orchestra.run.vm08.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-10T13:40:19.936 INFO:teuthology.orchestra.run.vm00.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-10T13:40:19.949 INFO:teuthology.orchestra.run.vm00.stdout:Get:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-10T13:40:19.952 
INFO:teuthology.orchestra.run.vm00.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-10T13:40:20.005 INFO:teuthology.orchestra.run.vm07.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-10T13:40:20.058 INFO:teuthology.orchestra.run.vm08.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-10T13:40:20.061 INFO:teuthology.orchestra.run.vm00.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-10T13:40:20.062 INFO:teuthology.orchestra.run.vm00.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-10T13:40:20.063 INFO:teuthology.orchestra.run.vm00.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-10T13:40:20.063 INFO:teuthology.orchestra.run.vm00.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-10T13:40:20.064 INFO:teuthology.orchestra.run.vm00.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-10T13:40:20.066 INFO:teuthology.orchestra.run.vm00.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-10T13:40:20.067 INFO:teuthology.orchestra.run.vm00.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-10T13:40:20.068 INFO:teuthology.orchestra.run.vm00.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-10T13:40:20.071 
INFO:teuthology.orchestra.run.vm08.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-10T13:40:20.130 INFO:teuthology.orchestra.run.vm07.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-10T13:40:20.144 INFO:teuthology.orchestra.run.vm00.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B] 2026-03-10T13:40:20.146 INFO:teuthology.orchestra.run.vm07.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-10T13:40:20.151 INFO:teuthology.orchestra.run.vm07.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-10T13:40:20.152 INFO:teuthology.orchestra.run.vm08.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-10T13:40:20.152 INFO:teuthology.orchestra.run.vm07.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-10T13:40:20.156 INFO:teuthology.orchestra.run.vm07.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-10T13:40:20.156 INFO:teuthology.orchestra.run.vm07.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 
19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-10T13:40:20.159 INFO:teuthology.orchestra.run.vm08.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-10T13:40:20.161 INFO:teuthology.orchestra.run.vm08.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-10T13:40:20.162 INFO:teuthology.orchestra.run.vm07.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-10T13:40:20.162 INFO:teuthology.orchestra.run.vm08.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-10T13:40:20.162 INFO:teuthology.orchestra.run.vm08.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-10T13:40:20.173 INFO:teuthology.orchestra.run.vm00.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB] 2026-03-10T13:40:20.176 INFO:teuthology.orchestra.run.vm00.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-10T13:40:20.178 INFO:teuthology.orchestra.run.vm00.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-10T13:40:20.180 INFO:teuthology.orchestra.run.vm00.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-10T13:40:20.181 INFO:teuthology.orchestra.run.vm00.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-10T13:40:20.181 INFO:teuthology.orchestra.run.vm00.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-10T13:40:20.182 
INFO:teuthology.orchestra.run.vm00.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-10T13:40:20.186 INFO:teuthology.orchestra.run.vm08.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-10T13:40:20.192 INFO:teuthology.orchestra.run.vm08.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-10T13:40:20.200 INFO:teuthology.orchestra.run.vm08.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-10T13:40:20.247 INFO:teuthology.orchestra.run.vm00.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-10T13:40:20.247 INFO:teuthology.orchestra.run.vm00.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-10T13:40:20.258 INFO:teuthology.orchestra.run.vm08.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B] 2026-03-10T13:40:20.258 INFO:teuthology.orchestra.run.vm08.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB] 2026-03-10T13:40:20.262 INFO:teuthology.orchestra.run.vm08.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-10T13:40:20.286 INFO:teuthology.orchestra.run.vm00.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-10T13:40:20.287 INFO:teuthology.orchestra.run.vm00.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-10T13:40:20.287 INFO:teuthology.orchestra.run.vm00.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-10T13:40:20.287 
INFO:teuthology.orchestra.run.vm00.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-10T13:40:20.288 INFO:teuthology.orchestra.run.vm00.stdout:Get:31 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-10T13:40:20.356 INFO:teuthology.orchestra.run.vm08.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-10T13:40:20.363 INFO:teuthology.orchestra.run.vm08.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-10T13:40:20.364 INFO:teuthology.orchestra.run.vm08.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-10T13:40:20.364 INFO:teuthology.orchestra.run.vm08.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-10T13:40:20.365 INFO:teuthology.orchestra.run.vm08.stdout:Get:23 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-10T13:40:20.366 INFO:teuthology.orchestra.run.vm08.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-10T13:40:20.366 INFO:teuthology.orchestra.run.vm08.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-10T13:40:20.366 INFO:teuthology.orchestra.run.vm08.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-10T13:40:20.426 INFO:teuthology.orchestra.run.vm00.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 
[35.3 kB] 2026-03-10T13:40:20.426 INFO:teuthology.orchestra.run.vm00.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-10T13:40:20.426 INFO:teuthology.orchestra.run.vm00.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-10T13:40:20.427 INFO:teuthology.orchestra.run.vm00.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-10T13:40:20.455 INFO:teuthology.orchestra.run.vm08.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-10T13:40:20.455 INFO:teuthology.orchestra.run.vm08.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-10T13:40:20.455 INFO:teuthology.orchestra.run.vm08.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-10T13:40:20.456 INFO:teuthology.orchestra.run.vm00.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-10T13:40:20.474 INFO:teuthology.orchestra.run.vm07.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-10T13:40:20.479 INFO:teuthology.orchestra.run.vm07.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-10T13:40:20.483 INFO:teuthology.orchestra.run.vm07.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-10T13:40:20.525 
INFO:teuthology.orchestra.run.vm00.stdout:Get:37 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-10T13:40:20.527 INFO:teuthology.orchestra.run.vm00.stdout:Get:38 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-10T13:40:20.527 INFO:teuthology.orchestra.run.vm00.stdout:Get:39 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-10T13:40:20.527 INFO:teuthology.orchestra.run.vm00.stdout:Get:40 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-10T13:40:20.528 INFO:teuthology.orchestra.run.vm00.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-10T13:40:20.528 INFO:teuthology.orchestra.run.vm00.stdout:Get:42 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-10T13:40:20.529 INFO:teuthology.orchestra.run.vm00.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-10T13:40:20.529 INFO:teuthology.orchestra.run.vm00.stdout:Get:44 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-10T13:40:20.530 INFO:teuthology.orchestra.run.vm00.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy/main 
amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-10T13:40:20.530 INFO:teuthology.orchestra.run.vm00.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-10T13:40:20.552 INFO:teuthology.orchestra.run.vm08.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-10T13:40:20.556 INFO:teuthology.orchestra.run.vm08.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-10T13:40:20.556 INFO:teuthology.orchestra.run.vm08.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-10T13:40:20.557 INFO:teuthology.orchestra.run.vm08.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-10T13:40:20.558 INFO:teuthology.orchestra.run.vm08.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-10T13:40:20.558 INFO:teuthology.orchestra.run.vm00.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-10T13:40:20.558 INFO:teuthology.orchestra.run.vm00.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-10T13:40:20.558 INFO:teuthology.orchestra.run.vm08.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-10T13:40:20.559 INFO:teuthology.orchestra.run.vm00.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-10T13:40:20.559 INFO:teuthology.orchestra.run.vm08.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-10T13:40:20.559 INFO:teuthology.orchestra.run.vm00.stdout:Get:50 
https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-10T13:40:20.560 INFO:teuthology.orchestra.run.vm00.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-10T13:40:20.650 INFO:teuthology.orchestra.run.vm08.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-10T13:40:20.651 INFO:teuthology.orchestra.run.vm08.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-10T13:40:20.651 INFO:teuthology.orchestra.run.vm08.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-10T13:40:20.662 INFO:teuthology.orchestra.run.vm00.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-10T13:40:20.663 INFO:teuthology.orchestra.run.vm00.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-10T13:40:20.667 INFO:teuthology.orchestra.run.vm00.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-10T13:40:20.667 INFO:teuthology.orchestra.run.vm00.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-10T13:40:20.668 INFO:teuthology.orchestra.run.vm00.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-10T13:40:20.733 INFO:teuthology.orchestra.run.vm08.stdout:Get:40 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-10T13:40:20.748 INFO:teuthology.orchestra.run.vm08.stdout:Get:41 
https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-10T13:40:20.749 INFO:teuthology.orchestra.run.vm08.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-10T13:40:20.749 INFO:teuthology.orchestra.run.vm08.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-10T13:40:20.749 INFO:teuthology.orchestra.run.vm08.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-10T13:40:20.751 INFO:teuthology.orchestra.run.vm08.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-10T13:40:20.756 INFO:teuthology.orchestra.run.vm08.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-10T13:40:20.759 INFO:teuthology.orchestra.run.vm08.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-10T13:40:20.767 INFO:teuthology.orchestra.run.vm00.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-10T13:40:20.771 INFO:teuthology.orchestra.run.vm00.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-10T13:40:20.771 INFO:teuthology.orchestra.run.vm00.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-10T13:40:20.799 INFO:teuthology.orchestra.run.vm00.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-10T13:40:20.799 INFO:teuthology.orchestra.run.vm00.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-10T13:40:20.847 
INFO:teuthology.orchestra.run.vm08.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-10T13:40:20.847 INFO:teuthology.orchestra.run.vm08.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-10T13:40:20.849 INFO:teuthology.orchestra.run.vm08.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-10T13:40:20.852 INFO:teuthology.orchestra.run.vm08.stdout:Get:51 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-10T13:40:20.858 INFO:teuthology.orchestra.run.vm08.stdout:Get:52 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-10T13:40:20.858 INFO:teuthology.orchestra.run.vm08.stdout:Get:53 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-10T13:40:20.872 INFO:teuthology.orchestra.run.vm00.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-10T13:40:20.873 INFO:teuthology.orchestra.run.vm00.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-10T13:40:20.873 INFO:teuthology.orchestra.run.vm00.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-10T13:40:20.874 INFO:teuthology.orchestra.run.vm00.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-10T13:40:21.016 
INFO:teuthology.orchestra.run.vm00.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-10T13:40:21.016 INFO:teuthology.orchestra.run.vm08.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-10T13:40:21.017 INFO:teuthology.orchestra.run.vm08.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-10T13:40:21.017 INFO:teuthology.orchestra.run.vm08.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-10T13:40:21.025 INFO:teuthology.orchestra.run.vm00.stdout:Get:67 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-10T13:40:21.025 INFO:teuthology.orchestra.run.vm00.stdout:Get:68 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-10T13:40:21.027 INFO:teuthology.orchestra.run.vm00.stdout:Get:69 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-10T13:40:21.080 INFO:teuthology.orchestra.run.vm00.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-10T13:40:21.081 INFO:teuthology.orchestra.run.vm00.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-10T13:40:21.082 INFO:teuthology.orchestra.run.vm00.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-10T13:40:21.084 INFO:teuthology.orchestra.run.vm00.stdout:Get:73 
https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-10T13:40:21.085 INFO:teuthology.orchestra.run.vm00.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-10T13:40:21.086 INFO:teuthology.orchestra.run.vm00.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-10T13:40:21.087 INFO:teuthology.orchestra.run.vm00.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-10T13:40:21.089 INFO:teuthology.orchestra.run.vm08.stdout:Get:57 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-10T13:40:21.089 INFO:teuthology.orchestra.run.vm00.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-10T13:40:21.090 INFO:teuthology.orchestra.run.vm08.stdout:Get:58 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-10T13:40:21.091 INFO:teuthology.orchestra.run.vm08.stdout:Get:59 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-10T13:40:21.119 INFO:teuthology.orchestra.run.vm08.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-10T13:40:21.120 INFO:teuthology.orchestra.run.vm08.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-10T13:40:21.120 INFO:teuthology.orchestra.run.vm08.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 
python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-10T13:40:21.120 INFO:teuthology.orchestra.run.vm08.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-10T13:40:21.120 INFO:teuthology.orchestra.run.vm08.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-10T13:40:21.121 INFO:teuthology.orchestra.run.vm08.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-10T13:40:21.176 INFO:teuthology.orchestra.run.vm08.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-10T13:40:21.178 INFO:teuthology.orchestra.run.vm08.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-10T13:40:21.181 INFO:teuthology.orchestra.run.vm08.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-10T13:40:21.184 INFO:teuthology.orchestra.run.vm00.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-10T13:40:21.193 INFO:teuthology.orchestra.run.vm00.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-10T13:40:21.194 INFO:teuthology.orchestra.run.vm00.stdout:Get:80 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-10T13:40:21.194 INFO:teuthology.orchestra.run.vm00.stdout:Get:81 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-10T13:40:21.194 INFO:teuthology.orchestra.run.vm00.stdout:Get:82 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-10T13:40:21.196 INFO:teuthology.orchestra.run.vm00.stdout:Get:83 
https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-10T13:40:21.197 INFO:teuthology.orchestra.run.vm00.stdout:Get:84 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-10T13:40:21.209 INFO:teuthology.orchestra.run.vm00.stdout:Get:85 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-10T13:40:21.273 INFO:teuthology.orchestra.run.vm08.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-10T13:40:21.278 INFO:teuthology.orchestra.run.vm08.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-10T13:40:21.282 INFO:teuthology.orchestra.run.vm08.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-10T13:40:21.283 INFO:teuthology.orchestra.run.vm08.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-10T13:40:21.284 INFO:teuthology.orchestra.run.vm08.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-10T13:40:21.287 INFO:teuthology.orchestra.run.vm00.stdout:Get:86 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-10T13:40:21.288 INFO:teuthology.orchestra.run.vm00.stdout:Get:87 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-10T13:40:21.292 INFO:teuthology.orchestra.run.vm08.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-10T13:40:21.372 INFO:teuthology.orchestra.run.vm08.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-10T13:40:21.376 
INFO:teuthology.orchestra.run.vm08.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-10T13:40:21.376 INFO:teuthology.orchestra.run.vm08.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-10T13:40:21.377 INFO:teuthology.orchestra.run.vm08.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-10T13:40:21.391 INFO:teuthology.orchestra.run.vm00.stdout:Get:88 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-10T13:40:21.392 INFO:teuthology.orchestra.run.vm00.stdout:Get:89 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-10T13:40:21.420 INFO:teuthology.orchestra.run.vm00.stdout:Get:90 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-10T13:40:21.470 INFO:teuthology.orchestra.run.vm08.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-10T13:40:21.474 INFO:teuthology.orchestra.run.vm08.stdout:Get:80 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-10T13:40:21.476 INFO:teuthology.orchestra.run.vm08.stdout:Get:81 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-10T13:40:21.621 INFO:teuthology.orchestra.run.vm08.stdout:Get:82 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-10T13:40:21.622 INFO:teuthology.orchestra.run.vm08.stdout:Get:83 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-10T13:40:21.622 INFO:teuthology.orchestra.run.vm08.stdout:Get:84 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 
6.2.5-1ubuntu2 [214 kB] 2026-03-10T13:40:21.622 INFO:teuthology.orchestra.run.vm08.stdout:Get:85 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-10T13:40:21.622 INFO:teuthology.orchestra.run.vm08.stdout:Get:86 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-10T13:40:21.631 INFO:teuthology.orchestra.run.vm08.stdout:Get:87 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-10T13:40:21.737 INFO:teuthology.orchestra.run.vm07.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-10T13:40:21.851 INFO:teuthology.orchestra.run.vm08.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-10T13:40:21.857 INFO:teuthology.orchestra.run.vm08.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-10T13:40:21.960 INFO:teuthology.orchestra.run.vm07.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-10T13:40:21.965 INFO:teuthology.orchestra.run.vm07.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-10T13:40:21.966 INFO:teuthology.orchestra.run.vm08.stdout:Get:90 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-10T13:40:22.007 INFO:teuthology.orchestra.run.vm07.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-10T13:40:22.016 INFO:teuthology.orchestra.run.vm07.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-10T13:40:22.296 INFO:teuthology.orchestra.run.vm07.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-10T13:40:23.218 INFO:teuthology.orchestra.run.vm07.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-10T13:40:23.221 INFO:teuthology.orchestra.run.vm07.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-10T13:40:23.270 INFO:teuthology.orchestra.run.vm07.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-10T13:40:23.381 INFO:teuthology.orchestra.run.vm07.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-10T13:40:23.393 
INFO:teuthology.orchestra.run.vm07.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-10T13:40:23.402 INFO:teuthology.orchestra.run.vm07.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-10T13:40:23.502 INFO:teuthology.orchestra.run.vm07.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-10T13:40:23.721 INFO:teuthology.orchestra.run.vm00.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-10T13:40:23.844 INFO:teuthology.orchestra.run.vm07.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-10T13:40:23.844 INFO:teuthology.orchestra.run.vm07.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-10T13:40:24.198 INFO:teuthology.orchestra.run.vm00.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-10T13:40:24.202 INFO:teuthology.orchestra.run.vm00.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 
libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-10T13:40:24.254 INFO:teuthology.orchestra.run.vm00.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-10T13:40:24.316 INFO:teuthology.orchestra.run.vm00.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-10T13:40:24.974 INFO:teuthology.orchestra.run.vm00.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-10T13:40:26.118 INFO:teuthology.orchestra.run.vm07.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB] 2026-03-10T13:40:26.187 INFO:teuthology.orchestra.run.vm07.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB] 2026-03-10T13:40:26.188 INFO:teuthology.orchestra.run.vm07.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB] 2026-03-10T13:40:26.207 INFO:teuthology.orchestra.run.vm08.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-10T13:40:26.790 INFO:teuthology.orchestra.run.vm07.stdout:Get:109 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB] 2026-03-10T13:40:27.019 INFO:teuthology.orchestra.run.vm08.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-10T13:40:27.039 INFO:teuthology.orchestra.run.vm08.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-10T13:40:27.056 INFO:teuthology.orchestra.run.vm08.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-10T13:40:27.095 INFO:teuthology.orchestra.run.vm07.stdout:Fetched 178 MB in 8s (21.7 MB/s) 2026-03-10T13:40:27.237 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package liblttng-ust1:amd64. 2026-03-10T13:40:27.250 INFO:teuthology.orchestra.run.vm08.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-10T13:40:27.263 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 
90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 111717 files and directories currently installed.) 2026-03-10T13:40:27.264 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-10T13:40:27.266 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-10T13:40:27.284 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-10T13:40:27.288 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ... 2026-03-10T13:40:27.289 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-10T13:40:27.302 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 2026-03-10T13:40:27.306 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-10T13:40:27.307 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-10T13:40:27.330 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libqt5core5a:amd64. 2026-03-10T13:40:27.336 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-10T13:40:27.340 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T13:40:27.379 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libqt5dbus5:amd64. 2026-03-10T13:40:27.383 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-10T13:40:27.384 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 
2026-03-10T13:40:27.402 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libqt5network5:amd64. 2026-03-10T13:40:27.406 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-10T13:40:27.407 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T13:40:27.433 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libthrift-0.16.0:amd64. 2026-03-10T13:40:27.439 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ... 2026-03-10T13:40:27.440 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-10T13:40:27.466 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:27.468 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-10T13:40:27.543 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:27.545 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-10T13:40:27.612 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libnbd0. 2026-03-10T13:40:27.617 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ... 2026-03-10T13:40:27.618 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libnbd0 (1.10.5-1) ... 2026-03-10T13:40:27.632 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libcephfs2. 2026-03-10T13:40:27.637 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 
2026-03-10T13:40:27.637 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:27.664 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-rados. 2026-03-10T13:40:27.669 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:27.669 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:27.690 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-ceph-argparse. 2026-03-10T13:40:27.695 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:27.696 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:27.710 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-cephfs. 2026-03-10T13:40:27.714 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:27.715 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:27.731 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-10T13:40:27.736 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:27.737 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:27.755 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-wcwidth. 
2026-03-10T13:40:27.760 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-10T13:40:27.761 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T13:40:27.777 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-prettytable. 2026-03-10T13:40:27.782 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-10T13:40:27.783 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-10T13:40:27.797 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-rbd. 2026-03-10T13:40:27.802 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:27.803 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:27.818 INFO:teuthology.orchestra.run.vm00.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-10T13:40:27.822 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-10T13:40:27.827 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-10T13:40:27.827 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 
2026-03-10T13:40:27.832 INFO:teuthology.orchestra.run.vm00.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-10T13:40:27.849 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-10T13:40:27.855 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-10T13:40:27.856 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T13:40:27.875 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-10T13:40:27.881 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-10T13:40:27.882 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T13:40:27.902 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package lua5.1. 2026-03-10T13:40:27.907 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-10T13:40:27.908 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 2026-03-10T13:40:27.926 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package lua-any. 2026-03-10T13:40:27.932 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 2026-03-10T13:40:27.933 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking lua-any (27ubuntu1) ... 2026-03-10T13:40:27.945 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package zip. 2026-03-10T13:40:27.951 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 
2026-03-10T13:40:27.951 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking zip (3.0-12build2) ... 2026-03-10T13:40:27.968 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package unzip. 2026-03-10T13:40:27.974 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-10T13:40:27.975 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-10T13:40:27.995 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package luarocks. 2026-03-10T13:40:28.000 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 2026-03-10T13:40:28.001 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 2026-03-10T13:40:28.005 INFO:teuthology.orchestra.run.vm00.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-10T13:40:28.050 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package librgw2. 2026-03-10T13:40:28.055 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:28.056 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:28.176 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-rgw. 2026-03-10T13:40:28.180 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:28.181 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:28.197 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package liboath0:amd64. 
2026-03-10T13:40:28.202 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-10T13:40:28.203 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T13:40:28.204 INFO:teuthology.orchestra.run.vm08.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-10T13:40:28.218 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libradosstriper1. 2026-03-10T13:40:28.223 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:28.225 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:28.248 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-common. 2026-03-10T13:40:28.254 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:28.254 INFO:teuthology.orchestra.run.vm00.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-10T13:40:28.255 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T13:40:28.355 INFO:teuthology.orchestra.run.vm00.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-10T13:40:28.366 INFO:teuthology.orchestra.run.vm00.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-10T13:40:28.660 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-base. 2026-03-10T13:40:28.665 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:28.670 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:28.681 INFO:teuthology.orchestra.run.vm00.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-10T13:40:28.772 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-10T13:40:28.777 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-10T13:40:28.779 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-10T13:40:28.795 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-cheroot. 2026-03-10T13:40:28.801 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ... 2026-03-10T13:40:28.802 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 
2026-03-10T13:40:28.820 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-10T13:40:28.826 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-10T13:40:28.826 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-10T13:40:28.841 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-10T13:40:28.847 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-10T13:40:28.848 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-10T13:40:28.864 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-10T13:40:28.869 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-10T13:40:28.869 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-10T13:40:28.884 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-tempora. 2026-03-10T13:40:28.889 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-10T13:40:28.890 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-10T13:40:28.905 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-portend. 2026-03-10T13:40:28.911 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-10T13:40:28.912 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-10T13:40:28.927 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-zc.lockfile. 
2026-03-10T13:40:28.932 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-10T13:40:28.933 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-10T13:40:28.950 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-10T13:40:28.955 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 2026-03-10T13:40:28.956 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-10T13:40:28.989 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-natsort. 2026-03-10T13:40:28.995 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 2026-03-10T13:40:28.995 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-natsort (8.0.2-1) ... 2026-03-10T13:40:29.015 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-logutils. 2026-03-10T13:40:29.022 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 2026-03-10T13:40:29.023 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-10T13:40:29.041 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-mako. 2026-03-10T13:40:29.047 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 2026-03-10T13:40:29.048 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T13:40:29.069 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-10T13:40:29.076 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 
2026-03-10T13:40:29.077 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-10T13:40:29.094 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-10T13:40:29.099 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-10T13:40:29.100 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 2026-03-10T13:40:29.114 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-webob. 2026-03-10T13:40:29.120 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-10T13:40:29.121 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T13:40:29.144 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-waitress. 2026-03-10T13:40:29.150 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 2026-03-10T13:40:29.152 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T13:40:29.170 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-tempita. 2026-03-10T13:40:29.175 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-10T13:40:29.176 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T13:40:29.190 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-paste. 2026-03-10T13:40:29.195 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-10T13:40:29.196 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 
2026-03-10T13:40:29.228 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python-pastedeploy-tpl. 2026-03-10T13:40:29.233 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-10T13:40:29.233 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T13:40:29.247 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-10T13:40:29.252 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-10T13:40:29.253 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-10T13:40:29.268 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-webtest. 2026-03-10T13:40:29.273 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 2026-03-10T13:40:29.274 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-webtest (2.0.35-1) ... 2026-03-10T13:40:29.289 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pecan. 2026-03-10T13:40:29.294 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 2026-03-10T13:40:29.295 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T13:40:29.327 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-10T13:40:29.333 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-10T13:40:29.334 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 
2026-03-10T13:40:29.358 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mgr-modules-core. 2026-03-10T13:40:29.364 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:29.365 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:29.404 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-10T13:40:29.410 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:29.411 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:29.427 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mgr. 2026-03-10T13:40:29.433 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:29.433 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:29.467 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mon. 2026-03-10T13:40:29.472 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:29.473 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:29.570 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-10T13:40:29.575 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-10T13:40:29.576 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 
2026-03-10T13:40:29.592 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-osd. 2026-03-10T13:40:29.596 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:29.597 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:29.684 INFO:teuthology.orchestra.run.vm00.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-10T13:40:29.684 INFO:teuthology.orchestra.run.vm00.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-10T13:40:29.876 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph. 2026-03-10T13:40:29.883 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:29.883 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:29.899 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-fuse. 2026-03-10T13:40:29.905 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:29.905 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:29.937 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mds. 2026-03-10T13:40:29.942 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 
2026-03-10T13:40:29.943 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:29.995 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package cephadm. 2026-03-10T13:40:30.001 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:30.003 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:30.021 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-10T13:40:30.027 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-10T13:40:30.027 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-10T13:40:30.054 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-10T13:40:30.059 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:30.060 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:30.083 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-10T13:40:30.089 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 2026-03-10T13:40:30.089 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-10T13:40:30.106 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-routes. 2026-03-10T13:40:30.111 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 
2026-03-10T13:40:30.112 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T13:40:30.141 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-10T13:40:30.147 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:30.299 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:30.701 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 2026-03-10T13:40:30.707 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 2026-03-10T13:40:30.709 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T13:40:30.770 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-joblib. 2026-03-10T13:40:30.775 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-10T13:40:30.776 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T13:40:30.810 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-10T13:40:30.816 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ... 2026-03-10T13:40:30.817 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-10T13:40:30.833 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-sklearn. 2026-03-10T13:40:30.838 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 
2026-03-10T13:40:30.840 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T13:40:30.976 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-10T13:40:30.981 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:30.983 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:31.272 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-cachetools. 2026-03-10T13:40:31.275 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-10T13:40:31.276 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-cachetools (5.0.0-1) ... 2026-03-10T13:40:31.296 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-rsa. 2026-03-10T13:40:31.298 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-10T13:40:31.299 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-10T13:40:31.323 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-google-auth. 2026-03-10T13:40:31.329 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-10T13:40:31.329 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-10T13:40:31.351 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-10T13:40:31.357 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 
2026-03-10T13:40:31.358 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-10T13:40:31.376 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-websocket. 2026-03-10T13:40:31.382 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-10T13:40:31.382 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-10T13:40:31.403 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-10T13:40:31.408 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-10T13:40:31.421 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T13:40:31.584 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-10T13:40:31.590 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:31.591 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:31.608 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-10T13:40:31.614 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-10T13:40:31.615 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T13:40:31.634 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-10T13:40:31.639 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 
2026-03-10T13:40:31.640 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T13:40:31.655 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package jq. 2026-03-10T13:40:31.660 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-10T13:40:31.661 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-10T13:40:31.677 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package socat. 2026-03-10T13:40:31.682 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-10T13:40:31.683 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-10T13:40:31.712 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package xmlstarlet. 2026-03-10T13:40:31.719 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-10T13:40:31.720 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-10T13:40:31.764 INFO:teuthology.orchestra.run.vm08.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-10T13:40:31.766 INFO:teuthology.orchestra.run.vm08.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-10T13:40:31.769 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-test. 2026-03-10T13:40:31.774 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 
2026-03-10T13:40:31.775 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:32.011 INFO:teuthology.orchestra.run.vm08.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-10T13:40:32.503 INFO:teuthology.orchestra.run.vm08.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-10T13:40:32.624 INFO:teuthology.orchestra.run.vm08.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-10T13:40:32.634 INFO:teuthology.orchestra.run.vm08.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-10T13:40:32.647 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package ceph-volume. 2026-03-10T13:40:32.653 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:32.654 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:32.683 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package libcephfs-dev. 2026-03-10T13:40:32.689 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:32.690 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T13:40:32.706 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-10T13:40:32.713 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 2026-03-10T13:40:32.714 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-10T13:40:32.740 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-10T13:40:32.745 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 2026-03-10T13:40:32.746 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 2026-03-10T13:40:32.766 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package nvme-cli. 2026-03-10T13:40:32.772 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-10T13:40:32.773 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T13:40:32.817 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package pkg-config. 2026-03-10T13:40:32.822 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 2026-03-10T13:40:32.823 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 2026-03-10T13:40:32.839 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python-asyncssh-doc. 2026-03-10T13:40:32.844 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-10T13:40:32.845 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T13:40:32.889 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-iniconfig. 
2026-03-10T13:40:32.895 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-10T13:40:32.896 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 2026-03-10T13:40:32.913 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pastescript. 2026-03-10T13:40:32.920 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 2026-03-10T13:40:32.921 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pastescript (2.0.2-4) ... 2026-03-10T13:40:32.942 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pluggy. 2026-03-10T13:40:32.949 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 2026-03-10T13:40:32.950 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-10T13:40:32.967 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-psutil. 2026-03-10T13:40:32.973 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-10T13:40:32.974 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 2026-03-10T13:40:32.995 INFO:teuthology.orchestra.run.vm08.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-10T13:40:32.999 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-py. 2026-03-10T13:40:33.005 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 2026-03-10T13:40:33.006 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-py (1.10.0-1) ... 
2026-03-10T13:40:33.029 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pygments. 2026-03-10T13:40:33.035 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ... 2026-03-10T13:40:33.036 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-10T13:40:33.111 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pyinotify. 2026-03-10T13:40:33.118 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ... 2026-03-10T13:40:33.119 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ... 2026-03-10T13:40:33.136 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-toml. 2026-03-10T13:40:33.142 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ... 2026-03-10T13:40:33.143 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-toml (0.10.2-1) ... 2026-03-10T13:40:33.161 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-pytest. 2026-03-10T13:40:33.167 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ... 2026-03-10T13:40:33.168 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ... 2026-03-10T13:40:33.196 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-simplejson. 2026-03-10T13:40:33.201 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 2026-03-10T13:40:33.202 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 2026-03-10T13:40:33.219 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package qttranslations5-l10n. 
2026-03-10T13:40:33.225 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-10T13:40:33.226 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 2026-03-10T13:40:33.340 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package radosgw. 2026-03-10T13:40:33.344 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:33.346 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:33.656 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package rbd-fuse. 2026-03-10T13:40:33.659 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:33.660 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:33.678 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package smartmontools. 2026-03-10T13:40:33.683 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-10T13:40:33.692 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-10T13:40:33.741 INFO:teuthology.orchestra.run.vm07.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-10T13:40:34.003 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-10T13:40:34.003 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 
2026-03-10T13:40:34.327 INFO:teuthology.orchestra.run.vm08.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-10T13:40:34.341 INFO:teuthology.orchestra.run.vm08.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-10T13:40:34.377 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-10T13:40:34.446 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-10T13:40:34.448 INFO:teuthology.orchestra.run.vm07.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T13:40:34.511 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-10T13:40:34.742 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service. 2026-03-10T13:40:35.135 INFO:teuthology.orchestra.run.vm07.stdout:nvmf-connect.target is a disabled or a static unit, not starting it. 2026-03-10T13:40:35.141 INFO:teuthology.orchestra.run.vm07.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142. 2026-03-10T13:40:35.143 INFO:teuthology.orchestra.run.vm07.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:35.191 INFO:teuthology.orchestra.run.vm07.stdout:Adding system user cephadm....done 2026-03-10T13:40:35.203 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T13:40:35.286 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 
2026-03-10T13:40:35.353 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-10T13:40:35.356 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-jaraco.functools (3.4.0-2) ...
2026-03-10T13:40:35.435 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-repoze.lru (0.7-2) ...
2026-03-10T13:40:35.517 INFO:teuthology.orchestra.run.vm07.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-10T13:40:35.519 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-10T13:40:35.623 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-10T13:40:35.748 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-10T13:40:35.824 INFO:teuthology.orchestra.run.vm07.stdout:Setting up unzip (6.0-26ubuntu3.2) ...
2026-03-10T13:40:35.833 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pyinotify (0.9.6-1.3) ...
2026-03-10T13:40:35.905 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-10T13:40:35.979 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:40:36.061 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-10T13:40:36.064 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-10T13:40:36.066 INFO:teuthology.orchestra.run.vm07.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ...
2026-03-10T13:40:36.069 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ...
2026-03-10T13:40:36.072 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-10T13:40:36.075 INFO:teuthology.orchestra.run.vm07.stdout:Setting up lua5.1 (5.1.5-8.1build4) ...
2026-03-10T13:40:36.080 INFO:teuthology.orchestra.run.vm07.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode
2026-03-10T13:40:36.082 INFO:teuthology.orchestra.run.vm07.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode
2026-03-10T13:40:36.084 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-10T13:40:36.087 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-10T13:40:36.235 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-10T13:40:36.314 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-10T13:40:36.401 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-10T13:40:36.487 INFO:teuthology.orchestra.run.vm07.stdout:Setting up zip (3.0-12build2) ...
2026-03-10T13:40:36.489 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-10T13:40:36.743 INFO:teuthology.orchestra.run.vm00.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB]
2026-03-10T13:40:36.790 INFO:teuthology.orchestra.run.vm00.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB]
2026-03-10T13:40:36.793 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ...
2026-03-10T13:40:36.797 INFO:teuthology.orchestra.run.vm00.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB]
2026-03-10T13:40:36.867 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ...
2026-03-10T13:40:36.870 INFO:teuthology.orchestra.run.vm07.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-10T13:40:36.872 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-10T13:40:36.970 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-10T13:40:37.114 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ...
2026-03-10T13:40:37.260 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-10T13:40:37.354 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-10T13:40:37.475 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-10T13:40:37.546 INFO:teuthology.orchestra.run.vm07.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-10T13:40:37.548 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:40:37.639 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-10T13:40:38.235 INFO:teuthology.orchestra.run.vm07.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ...
2026-03-10T13:40:38.258 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T13:40:38.263 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-10T13:40:38.343 INFO:teuthology.orchestra.run.vm07.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-10T13:40:38.346 INFO:teuthology.orchestra.run.vm07.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-10T13:40:38.349 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-10T13:40:38.379 INFO:teuthology.orchestra.run.vm00.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB]
2026-03-10T13:40:38.448 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-10T13:40:38.516 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T13:40:38.519 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-10T13:40:38.637 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-singledispatch (3.4.0.3-3) ...
2026-03-10T13:40:38.709 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-logutils (0.3.3-8) ...
2026-03-10T13:40:38.730 INFO:teuthology.orchestra.run.vm00.stdout:Fetched 178 MB in 20s (8983 kB/s)
2026-03-10T13:40:38.880 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-10T13:40:38.885 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package liblttng-ust1:amd64.
2026-03-10T13:40:38.911 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 111717 files and directories currently installed.)
2026-03-10T13:40:38.912 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ...
2026-03-10T13:40:38.915 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-10T13:40:38.935 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libdouble-conversion3:amd64.
2026-03-10T13:40:38.939 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-10T13:40:38.939 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-10T13:40:38.954 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libpcre2-16-0:amd64.
2026-03-10T13:40:38.956 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-simplegeneric (0.8.1-3) ...
2026-03-10T13:40:38.958 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ...
2026-03-10T13:40:38.959 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-10T13:40:38.979 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libqt5core5a:amd64.
2026-03-10T13:40:38.983 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-10T13:40:38.987 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T13:40:39.023 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-10T13:40:39.027 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libqt5dbus5:amd64.
2026-03-10T13:40:39.032 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-10T13:40:39.033 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T13:40:39.052 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libqt5network5:amd64.
2026-03-10T13:40:39.057 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-10T13:40:39.058 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T13:40:39.087 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libthrift-0.16.0:amd64.
2026-03-10T13:40:39.092 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-10T13:40:39.093 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-10T13:40:39.097 INFO:teuthology.orchestra.run.vm07.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-10T13:40:39.101 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-10T13:40:39.120 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T13:40:39.123 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-10T13:40:39.184 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-10T13:40:39.187 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-10T13:40:39.203 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T13:40:39.206 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-10T13:40:39.260 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-10T13:40:39.279 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libnbd0.
2026-03-10T13:40:39.285 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ...
2026-03-10T13:40:39.286 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libnbd0 (1.10.5-1) ...
2026-03-10T13:40:39.304 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libcephfs2.
2026-03-10T13:40:39.310 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T13:40:39.311 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:40:39.337 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rados.
2026-03-10T13:40:39.343 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T13:40:39.344 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:40:39.350 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-10T13:40:39.366 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-ceph-argparse.
2026-03-10T13:40:39.372 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T13:40:39.372 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:40:39.388 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cephfs.
2026-03-10T13:40:39.394 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T13:40:39.395 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:40:39.417 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-ceph-common.
2026-03-10T13:40:39.421 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-10T13:40:39.422 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:40:39.426 INFO:teuthology.orchestra.run.vm08.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB]
2026-03-10T13:40:39.430 INFO:teuthology.orchestra.run.vm08.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB]
2026-03-10T13:40:39.445 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-wcwidth.
2026-03-10T13:40:39.448 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-10T13:40:39.452 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ...
2026-03-10T13:40:39.453 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-10T13:40:39.455 INFO:teuthology.orchestra.run.vm08.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB]
2026-03-10T13:40:39.471 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-prettytable.
2026-03-10T13:40:39.476 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ...
2026-03-10T13:40:39.477 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-prettytable (2.5.0-2) ...
2026-03-10T13:40:39.492 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rbd.
2026-03-10T13:40:39.498 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T13:40:39.499 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:40:39.518 INFO:teuthology.orchestra.run.vm07.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-10T13:40:39.519 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package librdkafka1:amd64.
2026-03-10T13:40:39.520 INFO:teuthology.orchestra.run.vm07.stdout:Setting up lua-sec:amd64 (1.0.2-1) ...
2026-03-10T13:40:39.522 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-10T13:40:39.524 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ...
2026-03-10T13:40:39.525 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-10T13:40:39.525 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-10T13:40:39.549 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libreadline-dev:amd64.
2026-03-10T13:40:39.555 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ...
2026-03-10T13:40:39.556 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ...
2026-03-10T13:40:39.577 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package liblua5.3-dev:amd64.
2026-03-10T13:40:39.584 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ...
2026-03-10T13:40:39.585 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-10T13:40:39.607 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua5.1.
2026-03-10T13:40:39.612 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ...
2026-03-10T13:40:39.613 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ...
2026-03-10T13:40:39.633 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua-any.
2026-03-10T13:40:39.638 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ...
2026-03-10T13:40:39.638 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua-any (27ubuntu1) ...
2026-03-10T13:40:39.650 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package zip.
2026-03-10T13:40:39.655 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ...
2026-03-10T13:40:39.656 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking zip (3.0-12build2) ...
2026-03-10T13:40:39.669 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pastedeploy (2.1.1-1) ...
2026-03-10T13:40:39.671 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package unzip.
2026-03-10T13:40:39.676 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ...
2026-03-10T13:40:39.677 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking unzip (6.0-26ubuntu3.2) ...
2026-03-10T13:40:39.697 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package luarocks.
2026-03-10T13:40:39.702 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ...
2026-03-10T13:40:39.703 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ...
2026-03-10T13:40:39.747 INFO:teuthology.orchestra.run.vm07.stdout:Setting up lua-any (27ubuntu1) ...
2026-03-10T13:40:39.749 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-10T13:40:39.753 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package librgw2.
2026-03-10T13:40:39.759 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T13:40:39.760 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:40:39.824 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T13:40:39.827 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-10T13:40:39.885 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rgw.
2026-03-10T13:40:39.891 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T13:40:39.892 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:40:39.908 INFO:teuthology.orchestra.run.vm07.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-10T13:40:39.909 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package liboath0:amd64.
2026-03-10T13:40:39.911 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-webtest (2.0.35-1) ...
2026-03-10T13:40:39.915 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ...
2026-03-10T13:40:39.916 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-10T13:40:39.932 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libradosstriper1.
2026-03-10T13:40:39.937 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T13:40:39.938 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:40:39.963 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-common.
2026-03-10T13:40:39.969 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T13:40:39.970 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:40:39.990 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-10T13:40:40.136 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pastescript (2.0.2-4) ...
2026-03-10T13:40:40.235 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ...
2026-03-10T13:40:40.343 INFO:teuthology.orchestra.run.vm08.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB]
2026-03-10T13:40:40.450 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-10T13:40:40.506 INFO:teuthology.orchestra.run.vm07.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:40:40.509 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:40:40.523 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-10T13:40:40.527 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-base.
2026-03-10T13:40:40.533 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T13:40:40.537 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:40:40.668 INFO:teuthology.orchestra.run.vm08.stdout:Fetched 178 MB in 22s (8238 kB/s)
2026-03-10T13:40:40.672 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.functools.
2026-03-10T13:40:40.679 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ...
2026-03-10T13:40:40.680 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ...
2026-03-10T13:40:40.695 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package liblttng-ust1:amd64.
2026-03-10T13:40:40.705 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cheroot.
2026-03-10T13:40:40.711 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ...
2026-03-10T13:40:40.712 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-10T13:40:40.724 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 111717 files and directories currently installed.)
2026-03-10T13:40:40.726 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ...
2026-03-10T13:40:40.728 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-10T13:40:40.733 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.classes.
2026-03-10T13:40:40.739 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ...
2026-03-10T13:40:40.740 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ...
2026-03-10T13:40:40.747 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libdouble-conversion3:amd64.
2026-03-10T13:40:40.752 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-10T13:40:40.753 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-10T13:40:40.756 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.text.
2026-03-10T13:40:40.762 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ...
2026-03-10T13:40:40.763 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.text (3.6.0-2) ...
2026-03-10T13:40:40.767 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libpcre2-16-0:amd64.
2026-03-10T13:40:40.772 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ...
2026-03-10T13:40:40.772 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-10T13:40:40.780 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jaraco.collections.
2026-03-10T13:40:40.787 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ...
2026-03-10T13:40:40.788 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ...
2026-03-10T13:40:40.792 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libqt5core5a:amd64.
2026-03-10T13:40:40.795 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-10T13:40:40.799 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T13:40:40.804 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-tempora.
2026-03-10T13:40:40.810 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ...
2026-03-10T13:40:40.811 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-tempora (4.1.2-1) ...
2026-03-10T13:40:40.836 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-portend.
2026-03-10T13:40:40.838 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libqt5dbus5:amd64.
2026-03-10T13:40:40.841 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ...
2026-03-10T13:40:40.842 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-portend (3.0.0-1) ...
2026-03-10T13:40:40.843 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-10T13:40:40.844 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T13:40:40.861 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-zc.lockfile.
2026-03-10T13:40:40.865 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libqt5network5:amd64.
2026-03-10T13:40:40.867 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ...
2026-03-10T13:40:40.869 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-zc.lockfile (2.0-1) ...
2026-03-10T13:40:40.871 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-10T13:40:40.873 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T13:40:40.889 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cherrypy3.
2026-03-10T13:40:40.895 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ...
2026-03-10T13:40:40.896 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ...
2026-03-10T13:40:40.899 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libthrift-0.16.0:amd64.
2026-03-10T13:40:40.904 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-10T13:40:40.905 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-10T13:40:40.928 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-natsort.
2026-03-10T13:40:40.932 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T13:40:40.934 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ...
2026-03-10T13:40:40.935 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-10T13:40:40.935 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-natsort (8.0.2-1) ...
2026-03-10T13:40:40.953 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-logutils.
2026-03-10T13:40:40.958 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ...
2026-03-10T13:40:40.959 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-logutils (0.3.3-8) ...
2026-03-10T13:40:40.994 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-mako.
2026-03-10T13:40:41.000 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ...
2026-03-10T13:40:41.001 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-10T13:40:41.015 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-10T13:40:41.018 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-10T13:40:41.024 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-simplegeneric.
2026-03-10T13:40:41.030 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ...
2026-03-10T13:40:41.030 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-simplegeneric (0.8.1-3) ...
2026-03-10T13:40:41.048 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-10T13:40:41.054 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-10T13:40:41.070 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 2026-03-10T13:40:41.085 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-webob. 2026-03-10T13:40:41.088 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libnbd0. 2026-03-10T13:40:41.090 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-10T13:40:41.091 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T13:40:41.094 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ... 2026-03-10T13:40:41.095 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libnbd0 (1.10.5-1) ... 2026-03-10T13:40:41.106 INFO:teuthology.orchestra.run.vm07.stdout:Setting up luarocks (3.8.0+dfsg1-1) ... 2026-03-10T13:40:41.111 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libcephfs2. 2026-03-10T13:40:41.111 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-waitress. 2026-03-10T13:40:41.114 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.116 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:41.116 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.117 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 
2026-03-10T13:40:41.117 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.119 INFO:teuthology.orchestra.run.vm07.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.119 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T13:40:41.121 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.125 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.137 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-tempita. 2026-03-10T13:40:41.143 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-10T13:40:41.144 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T13:40:41.150 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-rados. 2026-03-10T13:40:41.153 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:41.155 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.158 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-paste. 2026-03-10T13:40:41.164 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-10T13:40:41.165 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T13:40:41.174 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-ceph-argparse. 
2026-03-10T13:40:41.179 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:41.180 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.186 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-10T13:40:41.186 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-10T13:40:41.196 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-cephfs. 2026-03-10T13:40:41.200 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python-pastedeploy-tpl. 2026-03-10T13:40:41.201 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:41.202 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.207 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-10T13:40:41.209 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T13:40:41.221 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-10T13:40:41.225 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-10T13:40:41.227 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:41.228 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T13:40:41.231 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-10T13:40:41.232 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-10T13:40:41.248 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-wcwidth. 2026-03-10T13:40:41.250 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-webtest. 2026-03-10T13:40:41.254 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-10T13:40:41.255 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T13:40:41.258 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 2026-03-10T13:40:41.259 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-webtest (2.0.35-1) ... 2026-03-10T13:40:41.275 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-prettytable. 2026-03-10T13:40:41.280 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-10T13:40:41.281 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pecan. 2026-03-10T13:40:41.282 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-10T13:40:41.289 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 2026-03-10T13:40:41.291 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T13:40:41.332 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-rbd. 2026-03-10T13:40:41.338 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 
2026-03-10T13:40:41.339 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.353 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-10T13:40:41.360 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-10T13:40:41.366 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-10T13:40:41.366 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-10T13:40:41.373 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-10T13:40:41.380 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T13:40:41.396 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-modules-core. 2026-03-10T13:40:41.400 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-10T13:40:41.401 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:41.402 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.405 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-10T13:40:41.406 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T13:40:41.423 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-10T13:40:41.428 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 
2026-03-10T13:40:41.429 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T13:40:41.439 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-10T13:40:41.445 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:41.445 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.447 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package lua5.1. 2026-03-10T13:40:41.452 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-10T13:40:41.453 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 2026-03-10T13:40:41.462 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr. 2026-03-10T13:40:41.467 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:41.468 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.471 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package lua-any. 2026-03-10T13:40:41.476 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 2026-03-10T13:40:41.477 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking lua-any (27ubuntu1) ... 2026-03-10T13:40:41.493 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package zip. 2026-03-10T13:40:41.497 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 2026-03-10T13:40:41.498 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking zip (3.0-12build2) ... 
2026-03-10T13:40:41.501 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mon. 2026-03-10T13:40:41.508 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:41.509 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.515 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package unzip. 2026-03-10T13:40:41.516 INFO:teuthology.orchestra.run.vm07.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.519 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.519 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-10T13:40:41.520 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-10T13:40:41.521 INFO:teuthology.orchestra.run.vm07.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.526 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.529 INFO:teuthology.orchestra.run.vm07.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.531 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.534 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.536 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.540 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package luarocks. 2026-03-10T13:40:41.546 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 
2026-03-10T13:40:41.546 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 2026-03-10T13:40:41.605 INFO:teuthology.orchestra.run.vm07.stdout:Adding group ceph....done 2026-03-10T13:40:41.616 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-10T13:40:41.620 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package librgw2. 2026-03-10T13:40:41.621 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-10T13:40:41.623 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T13:40:41.625 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:41.627 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.646 INFO:teuthology.orchestra.run.vm07.stdout:Adding system user ceph....done 2026-03-10T13:40:41.648 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-osd. 2026-03-10T13:40:41.654 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:41.655 INFO:teuthology.orchestra.run.vm07.stdout:Setting system user ceph properties....done 2026-03-10T13:40:41.655 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.659 INFO:teuthology.orchestra.run.vm07.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory 2026-03-10T13:40:41.725 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target. 2026-03-10T13:40:41.750 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-rgw. 
2026-03-10T13:40:41.753 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:41.754 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.774 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-10T13:40:41.777 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-10T13:40:41.778 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T13:40:41.798 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libradosstriper1. 2026-03-10T13:40:41.804 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:41.805 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.828 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-common. 2026-03-10T13:40:41.834 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:41.835 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:41.960 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-10T13:40:41.976 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph. 2026-03-10T13:40:41.981 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:41.982 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T13:40:41.996 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-fuse. 2026-03-10T13:40:42.001 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:42.002 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:42.034 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mds. 2026-03-10T13:40:42.039 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:42.040 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:42.339 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:42.340 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package cephadm. 2026-03-10T13:40:42.342 INFO:teuthology.orchestra.run.vm07.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:42.346 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:42.347 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:42.351 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-base. 2026-03-10T13:40:42.357 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:42.362 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:42.368 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-asyncssh. 
2026-03-10T13:40:42.374 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-10T13:40:42.375 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-10T13:40:42.404 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-10T13:40:42.411 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:42.412 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:42.471 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-10T13:40:42.473 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-10T13:40:42.478 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 2026-03-10T13:40:42.479 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-10T13:40:42.479 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-10T13:40:42.480 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-10T13:40:42.496 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-cheroot. 2026-03-10T13:40:42.496 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-routes. 2026-03-10T13:40:42.502 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ... 2026-03-10T13:40:42.503 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 
2026-03-10T13:40:42.503 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-10T13:40:42.504 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T13:40:42.545 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-10T13:40:42.546 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-10T13:40:42.551 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:42.552 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-10T13:40:42.552 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:42.553 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-10T13:40:42.568 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-10T13:40:42.573 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-10T13:40:42.574 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-10T13:40:42.588 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-10T13:40:42.593 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-10T13:40:42.594 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-10T13:40:42.595 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 
2026-03-10T13:40:42.595 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-10T13:40:42.608 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-tempora. 2026-03-10T13:40:42.613 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-10T13:40:42.614 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-10T13:40:42.628 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-portend. 2026-03-10T13:40:42.633 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-10T13:40:42.634 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-10T13:40:42.651 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-zc.lockfile. 2026-03-10T13:40:42.656 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-10T13:40:42.657 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-10T13:40:42.676 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-10T13:40:42.683 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 2026-03-10T13:40:42.684 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-10T13:40:42.717 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-natsort. 2026-03-10T13:40:42.724 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 2026-03-10T13:40:42.725 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-natsort (8.0.2-1) ... 
2026-03-10T13:40:42.745 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-logutils. 2026-03-10T13:40:42.751 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 2026-03-10T13:40:42.752 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-10T13:40:42.771 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-mako. 2026-03-10T13:40:42.777 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 2026-03-10T13:40:42.778 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T13:40:42.934 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-10T13:40:42.940 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 2026-03-10T13:40:42.941 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-10T13:40:42.954 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 2026-03-10T13:40:42.958 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-10T13:40:42.960 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 2026-03-10T13:40:42.961 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T13:40:42.965 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-10T13:40:42.967 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 
2026-03-10T13:40:42.973 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:42.982 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-webob. 2026-03-10T13:40:42.988 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-10T13:40:42.989 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T13:40:43.027 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-waitress. 2026-03-10T13:40:43.027 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-joblib. 2026-03-10T13:40:43.032 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 2026-03-10T13:40:43.033 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-10T13:40:43.033 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T13:40:43.034 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T13:40:43.053 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-tempita. 2026-03-10T13:40:43.058 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-10T13:40:43.059 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-10T13:40:43.060 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T13:40:43.071 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-threadpoolctl. 
2026-03-10T13:40:43.075 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-paste. 2026-03-10T13:40:43.076 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ... 2026-03-10T13:40:43.078 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-10T13:40:43.081 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-10T13:40:43.082 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T13:40:43.098 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-sklearn. 2026-03-10T13:40:43.104 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-10T13:40:43.105 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T13:40:43.118 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python-pastedeploy-tpl. 2026-03-10T13:40:43.124 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-10T13:40:43.125 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T13:40:43.138 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-10T13:40:43.142 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-10T13:40:43.143 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-10T13:40:43.159 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-webtest. 2026-03-10T13:40:43.163 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 
2026-03-10T13:40:43.163 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-webtest (2.0.35-1) ... 2026-03-10T13:40:43.196 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-pecan. 2026-03-10T13:40:43.201 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 2026-03-10T13:40:43.202 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T13:40:43.236 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-10T13:40:43.238 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-10T13:40:43.243 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:43.244 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:43.245 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-10T13:40:43.246 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-10T13:40:43.281 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-mgr-modules-core. 2026-03-10T13:40:43.287 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:43.288 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:43.326 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 
2026-03-10T13:40:43.332 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:43.333 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:43.367 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-mgr. 2026-03-10T13:40:43.374 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:43.375 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:43.412 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-mon. 2026-03-10T13:40:43.418 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:43.419 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:43.563 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:43.581 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-10T13:40:43.583 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-cachetools. 2026-03-10T13:40:43.586 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-10T13:40:43.587 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T13:40:43.589 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-10T13:40:43.590 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-cachetools (5.0.0-1) ... 
2026-03-10T13:40:43.605 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-osd. 2026-03-10T13:40:43.607 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-rsa. 2026-03-10T13:40:43.610 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:43.611 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:43.613 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-10T13:40:43.614 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-10T13:40:43.636 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-10T13:40:43.636 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-10T13:40:43.637 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-google-auth. 2026-03-10T13:40:43.644 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-10T13:40:43.645 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-10T13:40:43.668 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-10T13:40:43.675 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-10T13:40:43.676 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-10T13:40:43.697 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-websocket. 
2026-03-10T13:40:43.705 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-10T13:40:43.707 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-10T13:40:43.733 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-10T13:40:43.739 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-10T13:40:43.753 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T13:40:43.971 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph. 2026-03-10T13:40:43.976 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:43.977 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:43.994 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-fuse. 2026-03-10T13:40:43.999 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:44.001 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:44.002 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-10T13:40:44.009 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:44.011 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:44.036 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libonig5:amd64. 
2026-03-10T13:40:44.038 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-mds. 2026-03-10T13:40:44.040 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:44.043 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-10T13:40:44.044 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T13:40:44.045 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:44.045 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:44.065 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-10T13:40:44.072 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-10T13:40:44.073 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T13:40:44.103 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package jq. 2026-03-10T13:40:44.104 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package cephadm. 2026-03-10T13:40:44.106 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-10T13:40:44.106 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-10T13:40:44.109 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-10T13:40:44.110 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 
2026-03-10T13:40:44.111 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-10T13:40:44.111 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:44.129 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package socat. 2026-03-10T13:40:44.132 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-10T13:40:44.136 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-10T13:40:44.137 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-10T13:40:44.138 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-10T13:40:44.139 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-10T13:40:44.167 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package xmlstarlet. 2026-03-10T13:40:44.168 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-10T13:40:44.173 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-10T13:40:44.174 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:44.175 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-10T13:40:44.175 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:44.203 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-10T13:40:44.208 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 
2026-03-10T13:40:44.209 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-10T13:40:44.222 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-test. 2026-03-10T13:40:44.228 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:44.228 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-routes. 2026-03-10T13:40:44.229 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:44.234 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-10T13:40:44.235 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T13:40:44.270 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-10T13:40:44.276 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:44.277 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:44.482 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:44.577 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-10T13:40:44.577 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-10T13:40:44.867 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 
2026-03-10T13:40:44.874 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 2026-03-10T13:40:44.894 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T13:40:45.034 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:45.037 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:45.048 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-joblib. 2026-03-10T13:40:45.051 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:45.055 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-10T13:40:45.056 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T13:40:45.057 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package ceph-volume. 2026-03-10T13:40:45.061 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:45.062 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:45.093 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-10T13:40:45.097 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package libcephfs-dev. 2026-03-10T13:40:45.100 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ... 2026-03-10T13:40:45.101 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 
2026-03-10T13:40:45.103 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:45.103 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:45.119 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-10T13:40:45.119 INFO:teuthology.orchestra.run.vm07.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-10T13:40:45.120 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-sklearn. 2026-03-10T13:40:45.121 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-10T13:40:45.126 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 2026-03-10T13:40:45.127 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-10T13:40:45.128 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-10T13:40:45.128 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T13:40:45.160 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-10T13:40:45.166 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 2026-03-10T13:40:45.167 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 2026-03-10T13:40:45.189 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package nvme-cli. 2026-03-10T13:40:45.195 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 
2026-03-10T13:40:45.197 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T13:40:45.256 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package pkg-config. 2026-03-10T13:40:45.262 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 2026-03-10T13:40:45.263 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 2026-03-10T13:40:45.281 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python-asyncssh-doc. 2026-03-10T13:40:45.281 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-10T13:40:45.287 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-10T13:40:45.288 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:45.288 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T13:40:45.289 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:45.343 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-iniconfig. 2026-03-10T13:40:45.349 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-10T13:40:45.350 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 2026-03-10T13:40:45.370 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pastescript. 2026-03-10T13:40:45.378 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 
2026-03-10T13:40:45.380 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pastescript (2.0.2-4) ... 2026-03-10T13:40:45.406 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pluggy. 2026-03-10T13:40:45.414 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 2026-03-10T13:40:45.415 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-10T13:40:45.439 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-psutil. 2026-03-10T13:40:45.447 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-10T13:40:45.447 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 2026-03-10T13:40:45.471 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-py. 2026-03-10T13:40:45.477 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 2026-03-10T13:40:45.574 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-py (1.10.0-1) ... 2026-03-10T13:40:45.575 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:45.591 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:45.593 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-cachetools. 2026-03-10T13:40:45.595 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:45.600 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-10T13:40:45.601 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-cachetools (5.0.0-1) ... 
2026-03-10T13:40:45.601 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pygments. 2026-03-10T13:40:45.609 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ... 2026-03-10T13:40:45.609 INFO:teuthology.orchestra.run.vm07.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:45.610 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-10T13:40:45.620 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-rsa. 2026-03-10T13:40:45.626 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-10T13:40:45.628 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-10T13:40:45.657 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-google-auth. 2026-03-10T13:40:45.663 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-10T13:40:45.665 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-10T13:40:45.670 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pyinotify. 2026-03-10T13:40:45.677 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ... 2026-03-10T13:40:45.678 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ... 2026-03-10T13:40:45.686 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-10T13:40:45.692 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-10T13:40:45.693 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 
2026-03-10T13:40:45.695 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-toml. 2026-03-10T13:40:45.702 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ... 2026-03-10T13:40:45.704 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-toml (0.10.2-1) ... 2026-03-10T13:40:45.712 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-websocket. 2026-03-10T13:40:45.718 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-10T13:40:45.719 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-10T13:40:45.723 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-pytest. 2026-03-10T13:40:45.730 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ... 2026-03-10T13:40:45.731 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ... 2026-03-10T13:40:45.741 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-10T13:40:45.747 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-10T13:40:45.749 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-10T13:40:45.755 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T13:40:45.761 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-simplejson. 2026-03-10T13:40:45.764 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T13:40:45.768 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 
2026-03-10T13:40:45.770 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 2026-03-10T13:40:45.772 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T13:40:45.790 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package qttranslations5-l10n. 2026-03-10T13:40:45.796 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-10T13:40:45.798 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 2026-03-10T13:40:45.945 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for install-info (6.8-4build1) ... 2026-03-10T13:40:45.971 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package radosgw. 2026-03-10T13:40:45.978 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:45.979 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:45.990 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-10T13:40:45.994 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:45.995 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:46.014 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-10T13:40:46.018 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-10T13:40:46.020 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T13:40:46.042 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libjq1:amd64. 
2026-03-10T13:40:46.048 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-10T13:40:46.049 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T13:40:46.070 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package jq. 2026-03-10T13:40:46.076 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-10T13:40:46.077 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-10T13:40:46.098 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package socat. 2026-03-10T13:40:46.103 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-10T13:40:46.104 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-10T13:40:46.134 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package xmlstarlet. 2026-03-10T13:40:46.139 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-10T13:40:46.207 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-10T13:40:46.221 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package rbd-fuse. 2026-03-10T13:40:46.228 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:46.229 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:46.253 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package smartmontools. 2026-03-10T13:40:46.256 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-test. 
2026-03-10T13:40:46.256 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-10T13:40:46.263 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:46.267 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-10T13:40:46.270 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:46.293 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:40:46.293 INFO:teuthology.orchestra.run.vm07.stdout:Running kernel seems to be up-to-date. 2026-03-10T13:40:46.293 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:40:46.293 INFO:teuthology.orchestra.run.vm07.stdout:Services to be restarted: 2026-03-10T13:40:46.301 INFO:teuthology.orchestra.run.vm07.stdout: systemctl restart packagekit.service 2026-03-10T13:40:46.305 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:40:46.306 INFO:teuthology.orchestra.run.vm07.stdout:Service restarts being deferred: 2026-03-10T13:40:46.306 INFO:teuthology.orchestra.run.vm07.stdout: systemctl restart networkd-dispatcher.service 2026-03-10T13:40:46.306 INFO:teuthology.orchestra.run.vm07.stdout: systemctl restart unattended-upgrades.service 2026-03-10T13:40:46.306 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:40:46.306 INFO:teuthology.orchestra.run.vm07.stdout:No containers need to be restarted. 2026-03-10T13:40:46.306 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:40:46.306 INFO:teuthology.orchestra.run.vm07.stdout:No user sessions are running outdated binaries. 2026-03-10T13:40:46.306 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:40:46.306 INFO:teuthology.orchestra.run.vm07.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 
2026-03-10T13:40:46.318 INFO:teuthology.orchestra.run.vm00.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-10T13:40:46.567 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-10T13:40:46.567 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-10T13:40:47.036 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-10T13:40:47.104 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T13:40:47.107 DEBUG:teuthology.orchestra.run.vm07:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath 2026-03-10T13:40:47.180 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-10T13:40:47.235 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-volume. 2026-03-10T13:40:47.241 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-10T13:40:47.242 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:47.276 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libcephfs-dev. 2026-03-10T13:40:47.282 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:47.283 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-10T13:40:47.284 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T13:40:47.285 INFO:teuthology.orchestra.run.vm00.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T13:40:47.301 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-10T13:40:47.306 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 2026-03-10T13:40:47.307 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-10T13:40:47.333 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-10T13:40:47.339 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 2026-03-10T13:40:47.340 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 2026-03-10T13:40:47.351 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-10T13:40:47.359 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package nvme-cli. 2026-03-10T13:40:47.364 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-10T13:40:47.365 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T13:40:47.386 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-10T13:40:47.387 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 2026-03-10T13:40:47.407 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package pkg-config. 2026-03-10T13:40:47.414 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 2026-03-10T13:40:47.416 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 
2026-03-10T13:40:47.431 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python-asyncssh-doc. 2026-03-10T13:40:47.437 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-10T13:40:47.438 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T13:40:47.484 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-iniconfig. 2026-03-10T13:40:47.492 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-10T13:40:47.493 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 2026-03-10T13:40:47.510 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-pastescript. 2026-03-10T13:40:47.516 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 2026-03-10T13:40:47.517 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-pastescript (2.0.2-4) ... 2026-03-10T13:40:47.541 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-pluggy. 2026-03-10T13:40:47.547 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 2026-03-10T13:40:47.549 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-10T13:40:47.567 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-psutil. 2026-03-10T13:40:47.573 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-10T13:40:47.574 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 
2026-03-10T13:40:47.592 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:40:47.592 INFO:teuthology.orchestra.run.vm07.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T13:40:47.592 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-10T13:40:47.592 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T13:40:47.598 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-py. 2026-03-10T13:40:47.601 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service. 2026-03-10T13:40:47.604 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 2026-03-10T13:40:47.605 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-py (1.10.0-1) ... 2026-03-10T13:40:47.609 INFO:teuthology.orchestra.run.vm07.stdout:The following NEW packages will be installed: 2026-03-10T13:40:47.609 INFO:teuthology.orchestra.run.vm07.stdout: python3-jmespath python3-xmltodict 2026-03-10T13:40:47.629 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-pygments. 2026-03-10T13:40:47.635 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ... 2026-03-10T13:40:47.637 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-10T13:40:47.698 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-pyinotify. 2026-03-10T13:40:47.704 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ... 
2026-03-10T13:40:47.705 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ... 2026-03-10T13:40:47.721 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-toml. 2026-03-10T13:40:47.726 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ... 2026-03-10T13:40:47.727 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-toml (0.10.2-1) ... 2026-03-10T13:40:47.744 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-pytest. 2026-03-10T13:40:47.751 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ... 2026-03-10T13:40:47.752 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ... 2026-03-10T13:40:47.783 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-simplejson. 2026-03-10T13:40:47.789 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 2026-03-10T13:40:47.790 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 2026-03-10T13:40:47.808 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package qttranslations5-l10n. 2026-03-10T13:40:47.813 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-10T13:40:47.814 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 2026-03-10T13:40:47.932 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package radosgw. 2026-03-10T13:40:47.938 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:47.939 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T13:40:47.971 INFO:teuthology.orchestra.run.vm00.stdout:nvmf-connect.target is a disabled or a static unit, not starting it. 2026-03-10T13:40:47.979 INFO:teuthology.orchestra.run.vm00.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142. 2026-03-10T13:40:47.990 INFO:teuthology.orchestra.run.vm00.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:48.036 INFO:teuthology.orchestra.run.vm00.stdout:Adding system user cephadm....done 2026-03-10T13:40:48.046 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T13:40:48.059 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 2 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T13:40:48.059 INFO:teuthology.orchestra.run.vm07.stdout:Need to get 34.3 kB of archives. 2026-03-10T13:40:48.059 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 146 kB of additional disk space will be used. 2026-03-10T13:40:48.059 INFO:teuthology.orchestra.run.vm07.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB] 2026-03-10T13:40:48.135 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 2026-03-10T13:40:48.151 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package rbd-fuse. 2026-03-10T13:40:48.157 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-10T13:40:48.158 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:48.175 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package smartmontools. 2026-03-10T13:40:48.180 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-10T13:40:48.188 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 
2026-03-10T13:40:48.205 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T13:40:48.207 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.functools (3.4.0-2) ... 2026-03-10T13:40:48.233 INFO:teuthology.orchestra.run.vm08.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-10T13:40:48.272 INFO:teuthology.orchestra.run.vm07.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB] 2026-03-10T13:40:48.280 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-repoze.lru (0.7-2) ... 2026-03-10T13:40:48.357 INFO:teuthology.orchestra.run.vm00.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T13:40:48.360 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-py (1.10.0-1) ... 2026-03-10T13:40:48.451 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T13:40:48.490 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-10T13:40:48.490 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-10T13:40:48.500 INFO:teuthology.orchestra.run.vm07.stdout:Fetched 34.3 kB in 1s (51.4 kB/s) 2026-03-10T13:40:48.516 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-jmespath. 2026-03-10T13:40:48.553 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 
70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.) 2026-03-10T13:40:48.555 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ... 2026-03-10T13:40:48.557 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-jmespath (0.10.0-1) ... 2026-03-10T13:40:48.574 INFO:teuthology.orchestra.run.vm07.stdout:Selecting previously unselected package python3-xmltodict. 2026-03-10T13:40:48.581 INFO:teuthology.orchestra.run.vm07.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ... 2026-03-10T13:40:48.581 INFO:teuthology.orchestra.run.vm07.stdout:Unpacking python3-xmltodict (0.12.0-2) ... 2026-03-10T13:40:48.601 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cachetools (5.0.0-1) ... 2026-03-10T13:40:48.614 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-xmltodict (0.12.0-2) ... 2026-03-10T13:40:48.678 INFO:teuthology.orchestra.run.vm00.stdout:Setting up unzip (6.0-26ubuntu3.2) ... 2026-03-10T13:40:48.687 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pyinotify (0.9.6-1.3) ... 2026-03-10T13:40:48.716 INFO:teuthology.orchestra.run.vm07.stdout:Setting up python3-jmespath (0.10.0-1) ... 2026-03-10T13:40:48.766 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-threadpoolctl (3.1.0-1) ... 2026-03-10T13:40:48.836 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:48.877 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-10T13:40:48.914 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T13:40:48.917 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libnbd0 (1.10.5-1) ... 
2026-03-10T13:40:48.920 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-10T13:40:48.922 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T13:40:48.925 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T13:40:48.927 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua5.1 (5.1.5-8.1build4) ... 2026-03-10T13:40:48.931 INFO:teuthology.orchestra.run.vm00.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode 2026-03-10T13:40:48.934 INFO:teuthology.orchestra.run.vm00.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode 2026-03-10T13:40:48.936 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-10T13:40:48.938 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-psutil (5.9.0-1build1) ... 2026-03-10T13:40:48.945 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-10T13:40:48.948 INFO:teuthology.orchestra.run.vm08.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T13:40:49.016 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-10T13:40:49.056 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:40:49.056 INFO:teuthology.orchestra.run.vm07.stdout:Running kernel seems to be up-to-date. 
2026-03-10T13:40:49.056 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:40:49.056 INFO:teuthology.orchestra.run.vm07.stdout:Services to be restarted: 2026-03-10T13:40:49.062 INFO:teuthology.orchestra.run.vm07.stdout: systemctl restart packagekit.service 2026-03-10T13:40:49.064 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:40:49.065 INFO:teuthology.orchestra.run.vm07.stdout:Service restarts being deferred: 2026-03-10T13:40:49.065 INFO:teuthology.orchestra.run.vm07.stdout: systemctl restart networkd-dispatcher.service 2026-03-10T13:40:49.065 INFO:teuthology.orchestra.run.vm07.stdout: systemctl restart unattended-upgrades.service 2026-03-10T13:40:49.065 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:40:49.065 INFO:teuthology.orchestra.run.vm07.stdout:No containers need to be restarted. 2026-03-10T13:40:49.065 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:40:49.065 INFO:teuthology.orchestra.run.vm07.stdout:No user sessions are running outdated binaries. 2026-03-10T13:40:49.065 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T13:40:49.065 INFO:teuthology.orchestra.run.vm07.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-10T13:40:49.067 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-natsort (8.0.2-1) ... 2026-03-10T13:40:49.144 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T13:40:49.219 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-simplejson (3.17.6-1build1) ... 2026-03-10T13:40:49.261 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service. 2026-03-10T13:40:49.298 INFO:teuthology.orchestra.run.vm00.stdout:Setting up zip (3.0-12build2) ... 2026-03-10T13:40:49.301 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 
2026-03-10T13:40:49.591 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T13:40:49.656 INFO:teuthology.orchestra.run.vm08.stdout:nvmf-connect.target is a disabled or a static unit, not starting it. 2026-03-10T13:40:49.662 INFO:teuthology.orchestra.run.vm08.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142. 2026-03-10T13:40:49.671 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T13:40:49.671 INFO:teuthology.orchestra.run.vm08.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:49.674 INFO:teuthology.orchestra.run.vm00.stdout:Setting up qttranslations5-l10n (5.15.3-1) ... 2026-03-10T13:40:49.676 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T13:40:49.712 INFO:teuthology.orchestra.run.vm08.stdout:Adding system user cephadm....done 2026-03-10T13:40:49.721 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T13:40:49.765 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-10T13:40:49.794 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 2026-03-10T13:40:49.861 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T13:40:49.863 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-jaraco.functools (3.4.0-2) ... 2026-03-10T13:40:49.908 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T13:40:49.931 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-repoze.lru (0.7-2) ... 2026-03-10T13:40:49.950 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-10T13:40:49.954 DEBUG:teuthology.parallel:result is None 2026-03-10T13:40:50.001 INFO:teuthology.orchestra.run.vm08.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T13:40:50.003 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-py (1.10.0-1) ... 2026-03-10T13:40:50.044 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T13:40:50.093 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T13:40:50.134 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-10T13:40:50.222 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-cachetools (5.0.0-1) ... 2026-03-10T13:40:50.254 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.text (3.6.0-2) ... 2026-03-10T13:40:50.292 INFO:teuthology.orchestra.run.vm08.stdout:Setting up unzip (6.0-26ubuntu3.2) ... 2026-03-10T13:40:50.301 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-pyinotify (0.9.6-1.3) ... 2026-03-10T13:40:50.327 INFO:teuthology.orchestra.run.vm00.stdout:Setting up socat (1.7.4.1-3ubuntu4) ... 2026-03-10T13:40:50.330 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:50.375 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-threadpoolctl (3.1.0-1) ... 2026-03-10T13:40:50.422 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T13:40:50.449 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:50.525 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T13:40:50.528 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libnbd0 (1.10.5-1) ... 
2026-03-10T13:40:50.530 INFO:teuthology.orchestra.run.vm08.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-10T13:40:50.532 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T13:40:50.535 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T13:40:50.537 INFO:teuthology.orchestra.run.vm08.stdout:Setting up lua5.1 (5.1.5-8.1build4) ... 2026-03-10T13:40:50.540 INFO:teuthology.orchestra.run.vm08.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode 2026-03-10T13:40:50.543 INFO:teuthology.orchestra.run.vm08.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode 2026-03-10T13:40:50.545 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-10T13:40:50.547 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-psutil (5.9.0-1build1) ... 2026-03-10T13:40:50.672 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-natsort (8.0.2-1) ... 2026-03-10T13:40:50.748 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T13:40:50.820 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-simplejson (3.17.6-1build1) ... 2026-03-10T13:40:50.913 INFO:teuthology.orchestra.run.vm08.stdout:Setting up zip (3.0-12build2) ... 2026-03-10T13:40:50.915 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-10T13:40:51.191 INFO:teuthology.orchestra.run.vm00.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ... 2026-03-10T13:40:51.206 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T13:40:51.212 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 
2026-03-10T13:40:51.216 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-toml (0.10.2-1) ... 2026-03-10T13:40:51.276 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T13:40:51.279 INFO:teuthology.orchestra.run.vm08.stdout:Setting up qttranslations5-l10n (5.15.3-1) ... 2026-03-10T13:40:51.281 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T13:40:51.285 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T13:40:51.287 INFO:teuthology.orchestra.run.vm00.stdout:Setting up xmlstarlet (1.6.1-2.1) ... 2026-03-10T13:40:51.289 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pluggy (0.13.0-7.1) ... 2026-03-10T13:40:51.360 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-zc.lockfile (2.0-1) ... 2026-03-10T13:40:51.371 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-10T13:40:51.428 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T13:40:51.430 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rsa (4.8-1) ... 2026-03-10T13:40:51.508 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-singledispatch (3.4.0.3-3) ... 2026-03-10T13:40:51.512 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T13:40:51.574 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-logutils (0.3.3-8) ... 2026-03-10T13:40:51.644 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T13:40:51.649 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-tempora (4.1.2-1) ... 2026-03-10T13:40:51.719 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-simplegeneric (0.8.1-3) ... 
2026-03-10T13:40:51.735 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-10T13:40:51.787 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-prettytable (2.5.0-2) ... 2026-03-10T13:40:51.851 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-jaraco.text (3.6.0-2) ... 2026-03-10T13:40:51.862 INFO:teuthology.orchestra.run.vm00.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-10T13:40:51.864 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-websocket (1.2.3-1) ... 2026-03-10T13:40:51.918 INFO:teuthology.orchestra.run.vm08.stdout:Setting up socat (1.7.4.1-3ubuntu4) ... 2026-03-10T13:40:51.922 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:51.944 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T13:40:51.946 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-10T13:40:52.011 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T13:40:52.020 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T13:40:52.099 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T13:40:52.194 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jaraco.collections (3.4.0-2) ... 2026-03-10T13:40:52.260 INFO:teuthology.orchestra.run.vm00.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T13:40:52.263 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua-sec:amd64 (1.0.2-1) ... 2026-03-10T13:40:52.265 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T13:40:52.267 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ... 
2026-03-10T13:40:52.401 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pastedeploy (2.1.1-1) ... 2026-03-10T13:40:52.471 INFO:teuthology.orchestra.run.vm00.stdout:Setting up lua-any (27ubuntu1) ... 2026-03-10T13:40:52.473 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-portend (3.0.0-1) ... 2026-03-10T13:40:52.544 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T13:40:52.545 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-google-auth (1.5.1-3) ... 2026-03-10T13:40:52.570 INFO:teuthology.orchestra.run.vm08.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ... 2026-03-10T13:40:52.591 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T13:40:52.595 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-toml (0.10.2-1) ... 2026-03-10T13:40:52.622 INFO:teuthology.orchestra.run.vm00.stdout:Setting up jq (1.6-2.1ubuntu3.1) ... 2026-03-10T13:40:52.624 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-webtest (2.0.35-1) ... 2026-03-10T13:40:52.666 INFO:teuthology.orchestra.run.vm08.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T13:40:52.668 INFO:teuthology.orchestra.run.vm08.stdout:Setting up xmlstarlet (1.6.1-2.1) ... 2026-03-10T13:40:52.670 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-pluggy (0.13.0-7.1) ... 2026-03-10T13:40:52.698 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cherrypy3 (18.6.1-4) ... 2026-03-10T13:40:52.736 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-zc.lockfile (2.0-1) ... 2026-03-10T13:40:52.799 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T13:40:52.801 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-rsa (4.8-1) ... 2026-03-10T13:40:52.825 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pastescript (2.0.2-4) ... 
2026-03-10T13:40:52.871 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-singledispatch (3.4.0.3-3) ... 2026-03-10T13:40:52.911 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T13:40:52.938 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-logutils (0.3.3-8) ... 2026-03-10T13:40:53.007 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-tempora (4.1.2-1) ... 2026-03-10T13:40:53.021 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-10T13:40:53.023 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:53.025 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:53.027 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T13:40:53.076 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-simplegeneric (0.8.1-3) ... 2026-03-10T13:40:53.142 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-prettytable (2.5.0-2) ... 2026-03-10T13:40:53.213 INFO:teuthology.orchestra.run.vm08.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-10T13:40:53.215 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-websocket (1.2.3-1) ... 2026-03-10T13:40:53.298 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T13:40:53.300 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-10T13:40:53.370 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T13:40:53.459 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T13:40:53.548 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-jaraco.collections (3.4.0-2) ... 
2026-03-10T13:40:53.598 INFO:teuthology.orchestra.run.vm00.stdout:Setting up luarocks (3.8.0+dfsg1-1) ... 2026-03-10T13:40:53.604 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:53.607 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:53.609 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:53.611 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:53.613 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:53.615 INFO:teuthology.orchestra.run.vm08.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T13:40:53.617 INFO:teuthology.orchestra.run.vm08.stdout:Setting up lua-sec:amd64 (1.0.2-1) ... 2026-03-10T13:40:53.619 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T13:40:53.621 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ... 2026-03-10T13:40:53.671 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-10T13:40:53.671 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-10T13:40:53.754 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-pastedeploy (2.1.1-1) ... 2026-03-10T13:40:53.823 INFO:teuthology.orchestra.run.vm08.stdout:Setting up lua-any (27ubuntu1) ... 2026-03-10T13:40:53.825 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-portend (3.0.0-1) ... 2026-03-10T13:40:53.890 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 
2026-03-10T13:40:53.893 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-google-auth (1.5.1-3) ... 2026-03-10T13:40:53.971 INFO:teuthology.orchestra.run.vm08.stdout:Setting up jq (1.6-2.1ubuntu3.1) ... 2026-03-10T13:40:53.973 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-webtest (2.0.35-1) ... 2026-03-10T13:40:54.004 INFO:teuthology.orchestra.run.vm00.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:54.006 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:54.008 INFO:teuthology.orchestra.run.vm00.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:54.010 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:54.012 INFO:teuthology.orchestra.run.vm00.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:54.014 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:54.016 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:54.019 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:54.048 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-cherrypy3 (18.6.1-4) ... 
2026-03-10T13:40:54.052 INFO:teuthology.orchestra.run.vm00.stdout:Adding group ceph....done 2026-03-10T13:40:54.087 INFO:teuthology.orchestra.run.vm00.stdout:Adding system user ceph....done 2026-03-10T13:40:54.095 INFO:teuthology.orchestra.run.vm00.stdout:Setting system user ceph properties....done 2026-03-10T13:40:54.099 INFO:teuthology.orchestra.run.vm00.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory 2026-03-10T13:40:54.167 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target. 2026-03-10T13:40:54.395 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-10T13:40:54.423 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-pastescript (2.0.2-4) ... 2026-03-10T13:40:54.515 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T13:40:54.624 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-10T13:40:54.626 INFO:teuthology.orchestra.run.vm08.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:54.629 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:54.631 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T13:40:54.755 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:54.757 INFO:teuthology.orchestra.run.vm00.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:54.983 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 
2026-03-10T13:40:54.983 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-10T13:40:55.214 INFO:teuthology.orchestra.run.vm08.stdout:Setting up luarocks (3.8.0+dfsg1-1) ... 2026-03-10T13:40:55.222 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:55.224 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:55.227 INFO:teuthology.orchestra.run.vm08.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:55.229 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:55.231 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:55.291 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-10T13:40:55.291 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-10T13:40:55.323 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:55.410 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-10T13:40:55.631 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:55.633 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:55.635 INFO:teuthology.orchestra.run.vm08.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T13:40:55.637 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:55.639 INFO:teuthology.orchestra.run.vm08.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:55.641 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:55.643 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:55.645 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:55.675 INFO:teuthology.orchestra.run.vm08.stdout:Adding group ceph....done 2026-03-10T13:40:55.710 INFO:teuthology.orchestra.run.vm08.stdout:Adding system user ceph....done 2026-03-10T13:40:55.717 INFO:teuthology.orchestra.run.vm08.stdout:Setting system user ceph properties....done 2026-03-10T13:40:55.721 INFO:teuthology.orchestra.run.vm08.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory 2026-03-10T13:40:55.756 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:55.784 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target. 2026-03-10T13:40:55.826 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-10T13:40:55.826 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-10T13:40:55.983 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-10T13:40:56.206 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T13:40:56.268 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-10T13:40:56.268 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-10T13:40:56.339 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:56.341 INFO:teuthology.orchestra.run.vm08.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:56.578 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-10T13:40:56.578 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-10T13:40:56.651 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:56.725 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-10T13:40:56.725 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-10T13:40:56.918 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:57.000 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-10T13:40:57.095 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-10T13:40:57.098 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:57.111 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:57.170 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-10T13:40:57.170 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-10T13:40:57.340 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:57.406 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-10T13:40:57.406 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-10T13:40:57.552 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:57.564 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:57.567 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:57.579 INFO:teuthology.orchestra.run.vm00.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:57.693 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-10T13:40:57.700 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T13:40:57.715 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ... 
2026-03-10T13:40:57.859 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:57.882 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for install-info (6.8-4build1) ... 2026-03-10T13:40:57.918 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-10T13:40:57.918 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-10T13:40:58.200 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:40:58.200 INFO:teuthology.orchestra.run.vm00.stdout:Running kernel seems to be up-to-date. 2026-03-10T13:40:58.200 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:40:58.200 INFO:teuthology.orchestra.run.vm00.stdout:Services to be restarted: 2026-03-10T13:40:58.205 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart packagekit.service 2026-03-10T13:40:58.208 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:40:58.208 INFO:teuthology.orchestra.run.vm00.stdout:Service restarts being deferred: 2026-03-10T13:40:58.208 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart networkd-dispatcher.service 2026-03-10T13:40:58.208 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart unattended-upgrades.service 2026-03-10T13:40:58.208 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:40:58.208 INFO:teuthology.orchestra.run.vm00.stdout:No containers need to be restarted. 2026-03-10T13:40:58.208 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:40:58.208 INFO:teuthology.orchestra.run.vm00.stdout:No user sessions are running outdated binaries. 2026-03-10T13:40:58.208 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:40:58.208 INFO:teuthology.orchestra.run.vm00.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 
2026-03-10T13:40:58.248 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:58.329 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-10T13:40:58.329 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-10T13:40:58.668 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:58.670 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:58.683 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:58.741 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-10T13:40:58.741 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-10T13:40:59.059 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:59.072 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:59.074 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:59.087 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:40:59.161 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-10T13:40:59.163 DEBUG:teuthology.orchestra.run.vm00:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath 2026-03-10T13:40:59.208 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-10T13:40:59.216 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-10T13:40:59.230 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-10T13:40:59.240 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T13:40:59.307 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for install-info (6.8-4build1) ... 2026-03-10T13:40:59.470 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T13:40:59.470 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T13:40:59.628 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T13:40:59.628 INFO:teuthology.orchestra.run.vm08.stdout:Running kernel seems to be up-to-date. 2026-03-10T13:40:59.628 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T13:40:59.628 INFO:teuthology.orchestra.run.vm08.stdout:Services to be restarted: 2026-03-10T13:40:59.631 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:40:59.631 INFO:teuthology.orchestra.run.vm00.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T13:40:59.631 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-10T13:40:59.632 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-10T13:40:59.634 INFO:teuthology.orchestra.run.vm08.stdout: systemctl restart packagekit.service 2026-03-10T13:40:59.637 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T13:40:59.638 INFO:teuthology.orchestra.run.vm08.stdout:Service restarts being deferred: 2026-03-10T13:40:59.638 INFO:teuthology.orchestra.run.vm08.stdout: systemctl restart networkd-dispatcher.service 2026-03-10T13:40:59.638 INFO:teuthology.orchestra.run.vm08.stdout: systemctl restart unattended-upgrades.service 2026-03-10T13:40:59.638 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T13:40:59.638 INFO:teuthology.orchestra.run.vm08.stdout:No containers need to be restarted. 2026-03-10T13:40:59.638 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T13:40:59.638 INFO:teuthology.orchestra.run.vm08.stdout:No user sessions are running outdated binaries. 2026-03-10T13:40:59.638 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T13:40:59.638 INFO:teuthology.orchestra.run.vm08.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-10T13:40:59.653 INFO:teuthology.orchestra.run.vm00.stdout:The following NEW packages will be installed: 2026-03-10T13:40:59.654 INFO:teuthology.orchestra.run.vm00.stdout: python3-jmespath python3-xmltodict 2026-03-10T13:40:59.860 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 2 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T13:40:59.861 INFO:teuthology.orchestra.run.vm00.stdout:Need to get 34.3 kB of archives. 2026-03-10T13:40:59.861 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 146 kB of additional disk space will be used. 
2026-03-10T13:40:59.861 INFO:teuthology.orchestra.run.vm00.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB] 2026-03-10T13:40:59.942 INFO:teuthology.orchestra.run.vm00.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB] 2026-03-10T13:41:00.150 INFO:teuthology.orchestra.run.vm00.stdout:Fetched 34.3 kB in 0s (118 kB/s) 2026-03-10T13:41:00.163 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-jmespath. 2026-03-10T13:41:00.195 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.) 2026-03-10T13:41:00.198 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ... 2026-03-10T13:41:00.199 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-jmespath (0.10.0-1) ... 2026-03-10T13:41:00.214 INFO:teuthology.orchestra.run.vm00.stdout:Selecting previously unselected package python3-xmltodict. 2026-03-10T13:41:00.220 INFO:teuthology.orchestra.run.vm00.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ... 2026-03-10T13:41:00.221 INFO:teuthology.orchestra.run.vm00.stdout:Unpacking python3-xmltodict (0.12.0-2) ... 2026-03-10T13:41:00.247 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-xmltodict (0.12.0-2) ... 
2026-03-10T13:41:00.336 INFO:teuthology.orchestra.run.vm00.stdout:Setting up python3-jmespath (0.10.0-1) ... 2026-03-10T13:41:00.671 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T13:41:00.674 DEBUG:teuthology.orchestra.run.vm08:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath 2026-03-10T13:41:00.675 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:41:00.675 INFO:teuthology.orchestra.run.vm00.stdout:Running kernel seems to be up-to-date. 2026-03-10T13:41:00.675 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:41:00.675 INFO:teuthology.orchestra.run.vm00.stdout:Services to be restarted: 2026-03-10T13:41:00.680 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart packagekit.service 2026-03-10T13:41:00.683 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:41:00.683 INFO:teuthology.orchestra.run.vm00.stdout:Service restarts being deferred: 2026-03-10T13:41:00.683 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart networkd-dispatcher.service 2026-03-10T13:41:00.683 INFO:teuthology.orchestra.run.vm00.stdout: systemctl restart unattended-upgrades.service 2026-03-10T13:41:00.683 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:41:00.683 INFO:teuthology.orchestra.run.vm00.stdout:No containers need to be restarted. 2026-03-10T13:41:00.683 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:41:00.683 INFO:teuthology.orchestra.run.vm00.stdout:No user sessions are running outdated binaries. 2026-03-10T13:41:00.683 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:41:00.683 INFO:teuthology.orchestra.run.vm00.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-10T13:41:00.750 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 
2026-03-10T13:41:00.946 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-10T13:41:00.947 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-10T13:41:01.158 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:41:01.159 INFO:teuthology.orchestra.run.vm08.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-10T13:41:01.159 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-10T13:41:01.159 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T13:41:01.175 INFO:teuthology.orchestra.run.vm08.stdout:The following NEW packages will be installed: 2026-03-10T13:41:01.175 INFO:teuthology.orchestra.run.vm08.stdout: python3-jmespath python3-xmltodict 2026-03-10T13:41:01.262 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 2 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T13:41:01.262 INFO:teuthology.orchestra.run.vm08.stdout:Need to get 34.3 kB of archives. 2026-03-10T13:41:01.262 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 146 kB of additional disk space will be used. 2026-03-10T13:41:01.262 INFO:teuthology.orchestra.run.vm08.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB] 2026-03-10T13:41:01.281 INFO:teuthology.orchestra.run.vm08.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB] 2026-03-10T13:41:01.478 INFO:teuthology.orchestra.run.vm08.stdout:Fetched 34.3 kB in 0s (335 kB/s) 2026-03-10T13:41:01.494 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-jmespath. 2026-03-10T13:41:01.527 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 
20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.) 2026-03-10T13:41:01.529 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ... 2026-03-10T13:41:01.531 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-jmespath (0.10.0-1) ... 2026-03-10T13:41:01.547 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-xmltodict. 2026-03-10T13:41:01.552 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ... 2026-03-10T13:41:01.553 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-xmltodict (0.12.0-2) ... 2026-03-10T13:41:01.587 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-xmltodict (0.12.0-2) ... 2026-03-10T13:41:01.677 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T13:41:01.681 DEBUG:teuthology.parallel:result is None 2026-03-10T13:41:01.707 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-jmespath (0.10.0-1) ... 2026-03-10T13:41:02.035 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T13:41:02.035 INFO:teuthology.orchestra.run.vm08.stdout:Running kernel seems to be up-to-date. 
2026-03-10T13:41:02.035 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T13:41:02.035 INFO:teuthology.orchestra.run.vm08.stdout:Services to be restarted: 2026-03-10T13:41:02.040 INFO:teuthology.orchestra.run.vm08.stdout: systemctl restart packagekit.service 2026-03-10T13:41:02.043 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T13:41:02.043 INFO:teuthology.orchestra.run.vm08.stdout:Service restarts being deferred: 2026-03-10T13:41:02.043 INFO:teuthology.orchestra.run.vm08.stdout: systemctl restart networkd-dispatcher.service 2026-03-10T13:41:02.043 INFO:teuthology.orchestra.run.vm08.stdout: systemctl restart unattended-upgrades.service 2026-03-10T13:41:02.043 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T13:41:02.043 INFO:teuthology.orchestra.run.vm08.stdout:No containers need to be restarted. 2026-03-10T13:41:02.043 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T13:41:02.043 INFO:teuthology.orchestra.run.vm08.stdout:No user sessions are running outdated binaries. 2026-03-10T13:41:02.043 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T13:41:02.043 INFO:teuthology.orchestra.run.vm08.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-10T13:41:02.953 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-10T13:41:02.957 DEBUG:teuthology.parallel:result is None 2026-03-10T13:41:02.957 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:41:03.641 DEBUG:teuthology.orchestra.run.vm00:> dpkg-query -W -f '${Version}' ceph 2026-03-10T13:41:03.649 INFO:teuthology.orchestra.run.vm00.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-10T13:41:03.650 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-10T13:41:03.650 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 2026-03-10T13:41:03.651 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:41:04.255 DEBUG:teuthology.orchestra.run.vm07:> dpkg-query -W -f '${Version}' ceph 2026-03-10T13:41:04.264 INFO:teuthology.orchestra.run.vm07.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-10T13:41:04.264 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-10T13:41:04.264 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 2026-03-10T13:41:04.265 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:41:04.880 DEBUG:teuthology.orchestra.run.vm08:> dpkg-query -W -f '${Version}' ceph 2026-03-10T13:41:04.888 INFO:teuthology.orchestra.run.vm08.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-10T13:41:04.889 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-10T13:41:04.889 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 
2026-03-10T13:41:04.890 INFO:teuthology.task.install.util:Shipping valgrind.supp... 2026-03-10T13:41:04.890 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T13:41:04.890 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T13:41:04.899 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T13:41:04.899 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T13:41:04.908 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T13:41:04.908 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-10T13:41:04.937 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 2026-03-10T13:41:04.937 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T13:41:04.937 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T13:41:04.948 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T13:41:04.996 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T13:41:04.996 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T13:41:05.004 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T13:41:05.059 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T13:41:05.059 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/usr/bin/daemon-helper 2026-03-10T13:41:05.066 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-10T13:41:05.117 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 
2026-03-10T13:41:05.117 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T13:41:05.117 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T13:41:05.125 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T13:41:05.178 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T13:41:05.178 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T13:41:05.185 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T13:41:05.235 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T13:41:05.235 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-10T13:41:05.243 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-10T13:41:05.292 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 2026-03-10T13:41:05.293 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T13:41:05.293 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/stdin-killer 2026-03-10T13:41:05.300 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-10T13:41:05.348 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T13:41:05.349 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/usr/bin/stdin-killer 2026-03-10T13:41:05.355 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-10T13:41:05.402 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T13:41:05.402 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/usr/bin/stdin-killer 2026-03-10T13:41:05.409 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-10T13:41:05.457 INFO:teuthology.run_tasks:Running task cephadm... 
2026-03-10T13:41:05.503 INFO:tasks.cephadm:Config: {'conf': {'global': {'mon election default strategy': 1}, 'mgr': {'debug mgr': 20, 'debug ms': 1, 'mgr/cephadm/use_agent': False}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'MON_DOWN', 'mons down', 'mon down', 'out of quorum', 'CEPHADM_STRAY_DAEMON', 'CEPHADM_FAILED_DAEMON'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'} 2026-03-10T13:41:05.503 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:41:05.503 INFO:tasks.cephadm:Cluster fsid is c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:41:05.503 INFO:tasks.cephadm:Choosing monitor IPs and ports... 2026-03-10T13:41:05.503 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.100', 'mon.b': '192.168.123.107', 'mon.c': '192.168.123.108'} 2026-03-10T13:41:05.503 INFO:tasks.cephadm:First mon is mon.a on vm00 2026-03-10T13:41:05.503 INFO:tasks.cephadm:First mgr is a 2026-03-10T13:41:05.503 INFO:tasks.cephadm:Normalizing hostnames... 
2026-03-10T13:41:05.503 DEBUG:teuthology.orchestra.run.vm00:> sudo hostname $(hostname -s)
2026-03-10T13:41:05.511 DEBUG:teuthology.orchestra.run.vm07:> sudo hostname $(hostname -s)
2026-03-10T13:41:05.518 DEBUG:teuthology.orchestra.run.vm08:> sudo hostname $(hostname -s)
2026-03-10T13:41:05.525 INFO:tasks.cephadm:Downloading "compiled" cephadm from chacra
2026-03-10T13:41:05.526 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T13:41:06.200 INFO:tasks.cephadm:builder_project result: [{'url': 'https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'chacra_url': 'https://1.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'ubuntu', 'distro_version': '22.04', 'distro_codename': 'jammy', 'modified': '2026-02-25 19:37:07.680480', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678-ge911bdeb-1jammy', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.98+toko08', 'job_name': 'ceph-dev-pipeline'}}]
2026-03-10T13:41:06.803 INFO:tasks.util.chacra:got chacra host 1.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=ubuntu%2F22.04%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T13:41:06.804 INFO:tasks.cephadm:Discovered chacra url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm
2026-03-10T13:41:06.804 INFO:tasks.cephadm:Downloading cephadm from url:
https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm 2026-03-10T13:41:06.804 DEBUG:teuthology.orchestra.run.vm00:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-10T13:41:08.197 INFO:teuthology.orchestra.run.vm00.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 10 13:41 /home/ubuntu/cephtest/cephadm 2026-03-10T13:41:08.197 DEBUG:teuthology.orchestra.run.vm07:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-10T13:41:09.466 INFO:teuthology.orchestra.run.vm07.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 10 13:41 /home/ubuntu/cephtest/cephadm 2026-03-10T13:41:09.466 DEBUG:teuthology.orchestra.run.vm08:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-10T13:41:10.773 INFO:teuthology.orchestra.run.vm08.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 10 13:41 /home/ubuntu/cephtest/cephadm 2026-03-10T13:41:10.773 DEBUG:teuthology.orchestra.run.vm00:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-10T13:41:10.778 DEBUG:teuthology.orchestra.run.vm07:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-10T13:41:10.782 DEBUG:teuthology.orchestra.run.vm08:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x 
/home/ubuntu/cephtest/cephadm 2026-03-10T13:41:10.790 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts... 2026-03-10T13:41:10.790 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-10T13:41:10.822 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-10T13:41:10.825 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-10T13:41:10.919 INFO:teuthology.orchestra.run.vm00.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T13:41:10.931 INFO:teuthology.orchestra.run.vm08.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T13:41:10.932 INFO:teuthology.orchestra.run.vm07.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 
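The `test -s FILE && test $(stat -c%s FILE) -gt 1000 && chmod +x FILE` guard run on each host above is a download sanity check: a silent `curl` that received an error page would still write a file, so the task refuses anything at or under 1000 bytes before making the binary executable. A sketch of the same check in Python:

```python
import os
import stat
import tempfile

MIN_SIZE = 1000  # same threshold as `test $(stat -c%s FILE) -gt 1000`

def accept_download(path: str) -> bool:
    # Reject empty or suspiciously small files (e.g. an HTML error page
    # saved by curl) before marking the download executable.
    if not os.path.isfile(path) or os.path.getsize(path) <= MIN_SIZE:
        return False
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    return True

# A plausible download (795696 bytes in the log) passes...
fd, good = tempfile.mkstemp()
os.write(fd, b"x" * 795696)
os.close(fd)
ok = accept_download(good)

# ...while a tiny error page is rejected.
fd, bad_path = tempfile.mkstemp()
os.write(fd, b"404")
os.close(fd)
bad = accept_download(bad_path)
```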
2026-03-10T13:41:55.686 INFO:teuthology.orchestra.run.vm08.stdout:{
2026-03-10T13:41:55.686 INFO:teuthology.orchestra.run.vm08.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T13:41:55.686 INFO:teuthology.orchestra.run.vm08.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T13:41:55.686 INFO:teuthology.orchestra.run.vm08.stdout: "repo_digests": [
2026-03-10T13:41:55.686 INFO:teuthology.orchestra.run.vm08.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T13:41:55.686 INFO:teuthology.orchestra.run.vm08.stdout: ]
2026-03-10T13:41:55.686 INFO:teuthology.orchestra.run.vm08.stdout:}
2026-03-10T13:41:59.805 INFO:teuthology.orchestra.run.vm00.stdout:{
2026-03-10T13:41:59.806 INFO:teuthology.orchestra.run.vm00.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T13:41:59.806 INFO:teuthology.orchestra.run.vm00.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T13:41:59.806 INFO:teuthology.orchestra.run.vm00.stdout: "repo_digests": [
2026-03-10T13:41:59.806 INFO:teuthology.orchestra.run.vm00.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T13:41:59.806 INFO:teuthology.orchestra.run.vm00.stdout: ]
2026-03-10T13:41:59.806 INFO:teuthology.orchestra.run.vm00.stdout:}
2026-03-10T13:42:04.920 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-10T13:42:04.920 INFO:teuthology.orchestra.run.vm07.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)",
2026-03-10T13:42:04.920 INFO:teuthology.orchestra.run.vm07.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c",
2026-03-10T13:42:04.920 INFO:teuthology.orchestra.run.vm07.stdout: "repo_digests": [
2026-03-10T13:42:04.920 INFO:teuthology.orchestra.run.vm07.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc"
2026-03-10T13:42:04.920 INFO:teuthology.orchestra.run.vm07.stdout: ]
2026-03-10T13:42:04.920 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-10T13:42:04.930 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /etc/ceph
2026-03-10T13:42:04.938 DEBUG:teuthology.orchestra.run.vm07:> sudo mkdir -p /etc/ceph
2026-03-10T13:42:04.945 DEBUG:teuthology.orchestra.run.vm08:> sudo mkdir -p /etc/ceph
2026-03-10T13:42:04.953 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 777 /etc/ceph
2026-03-10T13:42:04.988 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod 777 /etc/ceph
2026-03-10T13:42:04.996 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod 777 /etc/ceph
2026-03-10T13:42:05.003 INFO:tasks.cephadm:Writing seed config...
2026-03-10T13:42:05.004 INFO:tasks.cephadm: override: [global] mon election default strategy = 1
2026-03-10T13:42:05.004 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-10T13:42:05.004 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-10T13:42:05.004 INFO:tasks.cephadm: override: [mgr] mgr/cephadm/use_agent = False
2026-03-10T13:42:05.004 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-10T13:42:05.004 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-10T13:42:05.004 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-10T13:42:05.004 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-10T13:42:05.004 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-10T13:42:05.004 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-10T13:42:05.004 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T13:42:05.004 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-10T13:42:05.032 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000

# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd

# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true

# adjust warnings
mon max pg per osd = 10000        # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false

# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off

# tests delete pools
mon allow pool delete = true
fsid = c9620084-1c86-11f1-bcc5-e3fb709eab0a
mon election default strategy = 1

[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true

# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000

[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
mgr/cephadm/use_agent = False

[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10

# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660     # 11m
auth service ticket ttl = 240 # 4m

# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20

[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-10T13:42:05.033 DEBUG:teuthology.orchestra.run.vm00:mon.a> sudo journalctl -f -n 0 -u ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mon.a.service
2026-03-10T13:42:05.074 DEBUG:teuthology.orchestra.run.vm00:mgr.a> sudo journalctl -f -n 0 -u ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mgr.a.service
2026-03-10T13:42:05.118 INFO:tasks.cephadm:Bootstrapping...
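The seed config dumped above is standard INI-style ceph.conf, so the overrides can be spot-checked programmatically. A small sketch with `configparser` on a fragment transcribed from the log; note that ceph option names may contain spaces and that comments trail some values, so `inline_comment_prefixes` is needed:

```python
import configparser

# A fragment of the generated config, transcribed from the log above.
SEED = """
[global]
mon max pg per osd = 10000        # >= luminous
mon allow pool delete = true
mon election default strategy = 1

[mon]
auth mon ticket ttl = 660     # 11m
debug paxos = 20
"""

# inline_comment_prefixes strips the trailing "# ..." from values;
# option names with spaces parse fine under INI semantics.
cp = configparser.ConfigParser(inline_comment_prefixes=("#",))
cp.read_string(SEED)
```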
2026-03-10T13:42:05.118 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id a --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.100 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-10T13:42:05.248 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------------------------------------------------------- 2026-03-10T13:42:05.249 INFO:teuthology.orchestra.run.vm00.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', 'c9620084-1c86-11f1-bcc5-e3fb709eab0a', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'a', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.100', '--skip-admin-label'] 2026-03-10T13:42:05.249 INFO:teuthology.orchestra.run.vm00.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 2026-03-10T13:42:05.249 INFO:teuthology.orchestra.run.vm00.stdout:Verifying podman|docker is present... 2026-03-10T13:42:05.249 INFO:teuthology.orchestra.run.vm00.stdout:Verifying lvm2 is present... 2026-03-10T13:42:05.249 INFO:teuthology.orchestra.run.vm00.stdout:Verifying time synchronization is in place... 
2026-03-10T13:42:05.252 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T13:42:05.252 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T13:42:05.254 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T13:42:05.254 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T13:42:05.257 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service 2026-03-10T13:42:05.257 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory 2026-03-10T13:42:05.259 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service 2026-03-10T13:42:05.259 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T13:42:05.261 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service 2026-03-10T13:42:05.261 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout masked 2026-03-10T13:42:05.263 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service 2026-03-10T13:42:05.263 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T13:42:05.265 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service 2026-03-10T13:42:05.265 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory 2026-03-10T13:42:05.267 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service 2026-03-10T13:42:05.267 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 
2026-03-10T13:42:05.270 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout enabled 2026-03-10T13:42:05.272 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout active 2026-03-10T13:42:05.272 INFO:teuthology.orchestra.run.vm00.stdout:Unit ntp.service is enabled and running 2026-03-10T13:42:05.272 INFO:teuthology.orchestra.run.vm00.stdout:Repeating the final host check... 2026-03-10T13:42:05.272 INFO:teuthology.orchestra.run.vm00.stdout:docker (/usr/bin/docker) is present 2026-03-10T13:42:05.272 INFO:teuthology.orchestra.run.vm00.stdout:systemctl is present 2026-03-10T13:42:05.272 INFO:teuthology.orchestra.run.vm00.stdout:lvcreate is present 2026-03-10T13:42:05.275 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-10T13:42:05.275 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-10T13:42:05.277 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-10T13:42:05.277 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T13:42:05.279 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service 2026-03-10T13:42:05.279 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory 2026-03-10T13:42:05.281 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service 2026-03-10T13:42:05.281 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T13:42:05.283 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service 2026-03-10T13:42:05.283 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout masked 2026-03-10T13:42:05.285 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 
from systemctl is-active systemd-timesyncd.service 2026-03-10T13:42:05.285 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T13:42:05.287 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service 2026-03-10T13:42:05.287 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory 2026-03-10T13:42:05.289 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service 2026-03-10T13:42:05.289 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout inactive 2026-03-10T13:42:05.292 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout enabled 2026-03-10T13:42:05.295 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stdout active 2026-03-10T13:42:05.295 INFO:teuthology.orchestra.run.vm00.stdout:Unit ntp.service is enabled and running 2026-03-10T13:42:05.295 INFO:teuthology.orchestra.run.vm00.stdout:Host looks OK 2026-03-10T13:42:05.295 INFO:teuthology.orchestra.run.vm00.stdout:Cluster fsid: c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:05.295 INFO:teuthology.orchestra.run.vm00.stdout:Acquiring lock 139887283538480 on /run/cephadm/c9620084-1c86-11f1-bcc5-e3fb709eab0a.lock 2026-03-10T13:42:05.295 INFO:teuthology.orchestra.run.vm00.stdout:Lock 139887283538480 acquired on /run/cephadm/c9620084-1c86-11f1-bcc5-e3fb709eab0a.lock 2026-03-10T13:42:05.295 INFO:teuthology.orchestra.run.vm00.stdout:Verifying IP 192.168.123.100 port 3300 ... 2026-03-10T13:42:05.295 INFO:teuthology.orchestra.run.vm00.stdout:Verifying IP 192.168.123.100 port 6789 ... 
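The time-synchronization probe above walks a list of candidate units (`chrony.service`, `chronyd.service`, `systemd-timesyncd.service`, `ntpd.service`, ...) and accepts the host once one unit reports both `is-enabled` → enabled and `is-active` → active; on this host only `ntp.service` qualifies. A sketch of that loop, with a fake `systemctl` runner standing in for the real subprocess calls so it is testable offline:

```python
from typing import Callable, Optional, Tuple

# Candidate unit names probed in the log, in the order they appear there
# (ntp.service is implied by "Unit ntp.service is enabled and running").
CANDIDATE_UNITS = [
    "chrony.service",
    "chronyd.service",
    "systemd-timesyncd.service",
    "ntpd.service",
    "ntp.service",
]

def find_time_sync_unit(
    systemctl: Callable[[str, str], Tuple[int, str]],
) -> Optional[str]:
    # Accept the first unit that is both enabled and active. `systemctl`
    # stands in for running `systemctl is-enabled/is-active UNIT` and
    # returns (exit_code, first_stdout_line).
    for unit in CANDIDATE_UNITS:
        code, out = systemctl("is-enabled", unit)
        if code != 0 or out != "enabled":
            continue
        code, out = systemctl("is-active", unit)
        if code == 0 and out == "active":
            return unit
    return None

# Fake host state matching the log: only ntp.service is enabled and active;
# every other query fails or reports "inactive".
STATE = {"ntp.service": {"is-enabled": (0, "enabled"), "is-active": (0, "active")}}

def fake_systemctl(verb: str, unit: str) -> Tuple[int, str]:
    return STATE.get(unit, {}).get(verb, (3, "inactive"))

found = find_time_sync_unit(fake_systemctl)
```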
2026-03-10T13:42:05.295 INFO:teuthology.orchestra.run.vm00.stdout:Base mon IP(s) is [192.168.123.100:3300, 192.168.123.100:6789], mon addrv is [v2:192.168.123.100:3300,v1:192.168.123.100:6789] 2026-03-10T13:42:05.296 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.100 metric 100 2026-03-10T13:42:05.296 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 2026-03-10T13:42:05.296 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.100 metric 100 2026-03-10T13:42:05.296 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.100 metric 100 2026-03-10T13:42:05.297 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium 2026-03-10T13:42:05.297 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout fe80::/64 dev ens3 proto kernel metric 256 pref medium 2026-03-10T13:42:05.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-10T13:42:05.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout inet6 ::1/128 scope host 2026-03-10T13:42:05.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-10T13:42:05.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout 2: ens3: mtu 1500 state UP qlen 1000 2026-03-10T13:42:05.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout inet6 fe80::5055:ff:fe00:0/64 scope link 2026-03-10T13:42:05.299 INFO:teuthology.orchestra.run.vm00.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-10T13:42:05.299 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.0/24` 
2026-03-10T13:42:05.299 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.0/24` 2026-03-10T13:42:05.299 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.1/32` 2026-03-10T13:42:05.299 INFO:teuthology.orchestra.run.vm00.stdout:Mon IP `192.168.123.100` is in CIDR network `192.168.123.1/32` 2026-03-10T13:42:05.299 INFO:teuthology.orchestra.run.vm00.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24', '192.168.123.1/32', '192.168.123.1/32'] 2026-03-10T13:42:05.299 INFO:teuthology.orchestra.run.vm00.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network 2026-03-10T13:42:05.299 INFO:teuthology.orchestra.run.vm00.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-10T13:42:06.354 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout e911bdebe5c8faa3800735d1568fcdca65db60df: Pulling from ceph-ci/ceph 2026-03-10T13:42:06.354 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout Digest: sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-10T13:42:06.354 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout Status: Image is up to date for quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:42:06.354 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/docker: stdout quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T13:42:06.513 INFO:teuthology.orchestra.run.vm00.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-10T13:42:06.513 INFO:teuthology.orchestra.run.vm00.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-10T13:42:06.513 
INFO:teuthology.orchestra.run.vm00.stdout:Extracting ceph user uid/gid from container image... 2026-03-10T13:42:06.605 INFO:teuthology.orchestra.run.vm00.stdout:stat: stdout 167 167 2026-03-10T13:42:06.605 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial keys... 2026-03-10T13:42:06.704 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQCuH7BpV2RkKBAAFtzwf8iWQHBu9yCWJKIi4g== 2026-03-10T13:42:06.812 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQCuH7BpZeG0LhAA5itE5o5yYo4dJVBChUR4Cg== 2026-03-10T13:42:06.926 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-authtool: stdout AQCuH7Bp2mGkNRAAdvEx261YCw7ZkDeJVXAc2w== 2026-03-10T13:42:06.926 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial monmap... 2026-03-10T13:42:07.040 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T13:42:07.040 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy 2026-03-10T13:42:07.040 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:07.040 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T13:42:07.040 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool for a [v2:192.168.123.100:3300,v1:192.168.123.100:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-10T13:42:07.040 INFO:teuthology.orchestra.run.vm00.stdout:setting min_mon_release = quincy 2026-03-10T13:42:07.040 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: set fsid to c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:07.040 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-10T13:42:07.040 INFO:teuthology.orchestra.run.vm00.stdout: 
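The inferred public-CIDR list a few lines up contains duplicates (`192.168.123.0/24` and the host-route `192.168.123.1/32`, each twice) because every matching route line contributes an entry. A strict membership test with Python's `ipaddress` module, as sketched below, keeps only the /24; the /32 host routes accepted in the log evidently pass a looser check in cephadm itself:

```python
import ipaddress

mon_ip = ipaddress.ip_address("192.168.123.100")

# Candidate networks parsed out of the `ip route` output above,
# duplicates included.
candidates = [
    "192.168.123.0/24",
    "192.168.123.0/24",
    "192.168.123.1/32",
    "192.168.123.1/32",
]

# Keep only networks that actually contain the mon IP, dropping
# duplicates while preserving order.
matching = []
for cidr in candidates:
    if mon_ip in ipaddress.ip_network(cidr) and cidr not in matching:
        matching.append(cidr)
```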
2026-03-10T13:42:07.040 INFO:teuthology.orchestra.run.vm00.stdout:Creating mon... 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 1 imported monmap: 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr epoch 0 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr last_changed 2026-03-10T13:42:07.014183+0000 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr min_mon_release 17 (quincy) 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr election_strategy: 1 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 0 /usr/bin/ceph-mon: set fsid to c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Git sha 0 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: DB SUMMARY 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: DB Session ID: E6LZHV6KN02V2YCB1QR4 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files: 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.create_if_missing: 1 2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T13:42:07.158 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.env: 0x55ad4cc19dc0
2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.fs: PosixFileSystem
2026-03-10T13:42:07.158 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.info_log: 0x55ad883c6da0
2026-03-10T13:42:07.161 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-10T13:42:07.161 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.statistics: (nil)
2026-03-10T13:42:07.161 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.use_fsync: 0
2026-03-10T13:42:07.161 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.max_log_file_size: 0
2026-03-10T13:42:07.161 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-10T13:42:07.161 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-10T13:42:07.161 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-10T13:42:07.161 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-10T13:42:07.161 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.allow_fallocate: 1
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.use_direct_reads: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.create_missing_column_families: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.db_log_dir:
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.wal_dir:
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.advise_random_on_open: 1
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.write_buffer_manager: 0x55ad883bd5e0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.rate_limiter: (nil)
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.unordered_write: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.row_cache: None
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.wal_filter: None
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.two_write_queues: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.manual_wal_flush: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.wal_compression: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.atomic_flush: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.log_readahead_size: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.db_host_id: __hostname__
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.enforce_single_del_contracts: true
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.max_background_jobs: 2
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.max_background_compactions: -1
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.max_subcompactions: 1
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.max_total_wal_size: 0
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.max_open_files: -1
2026-03-10T13:42:07.162 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.bytes_per_sync: 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Options.max_background_flushes: -1
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Compression algorithms supported:
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: kZSTD supported: 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: kXpressCompression supported: 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: kBZip2Compression supported: 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: kLZ4Compression supported: 1
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: kZlibCompression supported: 1
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: kSnappyCompression supported: 1
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: DMutex implementation: pthread_mutex_t
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.116+0000 7f1f4f584d80 4 rocksdb: [db/db_impl/db_impl_open.cc:317] Creating manifest 1
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.merge_operator:
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compaction_filter: None
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compaction_filter_factory: None
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ad883b9520)
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks: 1
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks_with_high_priority: 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr pin_top_level_index_and_filter: 1
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr index_type: 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr data_block_index_type: 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr index_shortening: 1
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr data_block_hash_table_util_ratio: 0.750000
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr checksum: 4
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr no_block_cache: 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache: 0x55ad883df350
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache_name: BinnedLRUCache
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache_options:
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr capacity : 536870912
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr num_shard_bits : 4
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr strict_capacity_limit : 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr high_pri_pool_ratio: 0.000
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_cache_compressed: (nil)
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr persistent_cache: (nil)
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_size: 4096
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_size_deviation: 10
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_restart_interval: 16
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr index_block_restart_interval: 1
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr metadata_block_size: 4096
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr partition_filters: 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr use_delta_encoding: 1
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr filter_policy: bloomfilter
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr whole_key_filtering: 1
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr verify_compression: 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr read_amp_bytes_per_bit: 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr format_version: 5
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr enable_index_compression: 1
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr block_align: 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr max_auto_readahead_size: 262144
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr prepopulate_block_cache: 0
2026-03-10T13:42:07.163 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr initial_auto_readahead_size: 8192
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr num_file_reads_for_auto_readahead: 2
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compression: NoCompression
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.num_levels: 7
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compression_opts.level: 32767
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compression_opts.enabled: false
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.arena_block_size: 1048576
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.disable_auto_compactions: 0
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-10T13:42:07.164 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.inplace_update_support: 0
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.inplace_update_num_locks: 10000
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.memtable_huge_page_size: 0
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.bloom_locality: 0
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.max_successive_merges: 0
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.force_consistency_checks: 1
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.ttl: 2592000
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.enable_blob_files: false
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.min_blob_size: 0
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.blob_file_size: 268435456
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.blob_file_starting_level: 0
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr
2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d653a3a9-80e8-489f-9e64-469e2d920817 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.120+0000 7f1f4f584d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 5 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.124+0000 7f1f4f584d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ad883e0e00 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.124+0000 7f1f4f584d80 4 rocksdb: DB pointer 0x55ad884c4000 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.124+0000 7f1f46d0e640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.124+0000 7f1f46d0e640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** DB Stats ** 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 
0.00 GB, 0.00 MB/s 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] ** 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** 
Compaction Stats [default] ** 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Flush(GB): cumulative 0.000, interval 0.000 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(Total Files): cumulative 0, interval 0 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr AddFile(Keys): cumulative 0, interval 0 2026-03-10T13:42:07.165 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T13:42:07.166 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T13:42:07.166 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T13:42:07.166 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Block cache BinnedLRUCache@0x55ad883df350#7 capacity: 512.00 MB usage: 0.00 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 8e-06 secs_since: 0 2026-03-10T13:42:07.166 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%) 2026-03-10T13:42:07.166 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T13:42:07.166 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr ** File Read Latency Histogram By Level [default] ** 2026-03-10T13:42:07.166 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr 2026-03-10T13:42:07.166 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.124+0000 7f1f4f584d80 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work 2026-03-10T13:42:07.166 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.124+0000 7f1f4f584d80 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete 2026-03-10T13:42:07.166 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-10T13:42:07.124+0000 7f1f4f584d80 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a 2026-03-10T13:42:07.166 INFO:teuthology.orchestra.run.vm00.stdout:create mon.a on 
2026-03-10T13:42:07.339 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Removed /etc/systemd/system/multi-user.target.wants/ceph.target. 2026-03-10T13:42:07.527 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-10T13:42:07.688 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a.target → /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a.target. 2026-03-10T13:42:07.688 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a.target → /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a.target. 2026-03-10T13:42:07.967 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mon.a 2026-03-10T13:42:07.967 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to reset failed state of unit ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mon.a.service: Unit ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mon.a.service not loaded. 2026-03-10T13:42:08.142 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a.target.wants/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mon.a.service → /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service. 2026-03-10T13:42:08.280 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-10T13:42:08.280 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T13:42:08.280 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mon to start... 
2026-03-10T13:42:08.280 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mon... 2026-03-10T13:42:08.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:08 vm00 systemd[1]: Started Ceph mon.a for c9620084-1c86-11f1-bcc5-e3fb709eab0a. 2026-03-10T13:42:08.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:08 vm00 bash[20259]: cluster 2026-03-10T13:42:08.642278+0000 mon.a (mon.0) 0 : cluster [INF] mkfs c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:08.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:08 vm00 bash[20259]: cluster 2026-03-10T13:42:08.642278+0000 mon.a (mon.0) 0 : cluster [INF] mkfs c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:08.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:08 vm00 bash[20259]: cluster 2026-03-10T13:42:08.635678+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T13:42:08.879 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:08 vm00 bash[20259]: cluster 2026-03-10T13:42:08.635678+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T13:42:09.037 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout cluster: 2026-03-10T13:42:09.038 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout id: c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:09.038 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-10T13:42:09.038 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T13:42:09.038 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout services: 2026-03-10T13:42:09.038 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.0764196s) 2026-03-10T13:42:09.038 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-10T13:42:09.038 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-10T13:42:09.038 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T13:42:09.038 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout data: 2026-03-10T13:42:09.038 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-10T13:42:09.038 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-10T13:42:09.038 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-10T13:42:09.038 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout pgs: 2026-03-10T13:42:09.038 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T13:42:09.038 INFO:teuthology.orchestra.run.vm00.stdout:mon is available 2026-03-10T13:42:09.038 INFO:teuthology.orchestra.run.vm00.stdout:Assimilating anything we can from ceph.conf... 2026-03-10T13:42:09.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T13:42:09.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T13:42:09.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout fsid = c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:09.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T13:42:09.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.100:3300,v1:192.168.123.100:6789] 2026-03-10T13:42:09.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T13:42:09.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T13:42:09.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T13:42:09.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T13:42:09.309 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T13:42:09.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T13:42:09.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr/cephadm/use_agent = False 2026-03-10T13:42:09.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T13:42:09.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T13:42:09.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T13:42:09.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T13:42:09.309 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T13:42:09.309 INFO:teuthology.orchestra.run.vm00.stdout:Generating new minimal ceph.conf... 2026-03-10T13:42:09.976 INFO:teuthology.orchestra.run.vm00.stdout:Restarting the monitor... 2026-03-10T13:42:09.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.641147+0000 mon.a (mon.0) 2 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T13:42:09.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.641147+0000 mon.a (mon.0) 2 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T13:42:09.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.641524+0000 mon.a (mon.0) 3 : cluster [DBG] monmap epoch 1 2026-03-10T13:42:09.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.641524+0000 mon.a (mon.0) 3 : cluster [DBG] monmap epoch 1 2026-03-10T13:42:09.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.641529+0000 mon.a (mon.0) 4 : cluster [DBG] fsid 
c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:09.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.641529+0000 mon.a (mon.0) 4 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:09.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.641532+0000 mon.a (mon.0) 5 : cluster [DBG] last_changed 2026-03-10T13:42:07.014183+0000 2026-03-10T13:42:09.980 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.641532+0000 mon.a (mon.0) 5 : cluster [DBG] last_changed 2026-03-10T13:42:07.014183+0000 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.641535+0000 mon.a (mon.0) 6 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.641535+0000 mon.a (mon.0) 6 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.641538+0000 mon.a (mon.0) 7 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.641538+0000 mon.a (mon.0) 7 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.641540+0000 mon.a (mon.0) 8 : cluster [DBG] election_strategy: 1 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.641540+0000 mon.a (mon.0) 8 : cluster [DBG] election_strategy: 1 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 
2026-03-10T13:42:08.641543+0000 mon.a (mon.0) 9 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.641543+0000 mon.a (mon.0) 9 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.644031+0000 mon.a (mon.0) 10 : cluster [DBG] fsmap 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.644031+0000 mon.a (mon.0) 10 : cluster [DBG] fsmap 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.649369+0000 mon.a (mon.0) 11 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.649369+0000 mon.a (mon.0) 11 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.651165+0000 mon.a (mon.0) 12 : cluster [DBG] mgrmap e1: no daemons active 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: cluster 2026-03-10T13:42:08.651165+0000 mon.a (mon.0) 12 : cluster [DBG] mgrmap e1: no daemons active 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: audit 2026-03-10T13:42:08.717487+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.100:0/674371790' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: audit 2026-03-10T13:42:08.717487+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 
192.168.123.100:0/674371790' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: audit 2026-03-10T13:42:09.261396+0000 mon.a (mon.0) 14 : audit [INF] from='client.? 192.168.123.100:0/1748079503' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: audit 2026-03-10T13:42:09.261396+0000 mon.a (mon.0) 14 : audit [INF] from='client.? 192.168.123.100:0/1748079503' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: audit 2026-03-10T13:42:09.264233+0000 mon.a (mon.0) 15 : audit [INF] from='client.? 192.168.123.100:0/1748079503' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-10T13:42:09.981 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 bash[20259]: audit 2026-03-10T13:42:09.264233+0000 mon.a (mon.0) 15 : audit [INF] from='client.? 192.168.123.100:0/1748079503' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-10T13:42:10.282 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:09 vm00 systemd[1]: Stopping Ceph mon.a for c9620084-1c86-11f1-bcc5-e3fb709eab0a... 
2026-03-10T13:42:10.282 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20259]: debug 2026-03-10T13:42:10.052+0000 7f67f4037640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T13:42:10.282 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20259]: debug 2026-03-10T13:42:10.052+0000 7f67f4037640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-10T13:42:10.340 INFO:teuthology.orchestra.run.vm00.stdout:Setting public_network to 192.168.123.1/32,192.168.123.0/24 in mon config section 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20660]: ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a-mon-a 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 systemd[1]: ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mon.a.service: Deactivated successfully. 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 systemd[1]: Stopped Ceph mon.a for c9620084-1c86-11f1-bcc5-e3fb709eab0a. 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 systemd[1]: Started Ceph mon.a for c9620084-1c86-11f1-bcc5-e3fb709eab0a. 
2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.448+0000 7f6f30c30d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.448+0000 7f6f30c30d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 8 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.448+0000 7f6f30c30d80 0 pidfile_write: ignore empty --pid-file 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 0 load: jerasure load: lrc 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Git sha 0 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: DB SUMMARY 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: DB Session ID: 5TB0XU03QP3YED0WVSTQ 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: CURRENT file: CURRENT 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 
2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 86905 ; 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.create_if_missing: 0 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 
bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.env: 0x55dac9c06dc0 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.info_log: 0x55db06762700 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.statistics: (nil) 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.use_fsync: 0 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T13:42:10.569 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-10T13:42:10.569 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.db_log_dir: 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.wal_dir: 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: 
Options.WAL_size_limit_MB: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.write_buffer_manager: 0x55db06767900 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T13:42:10.570 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.unordered_write: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.row_cache: None 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: 
debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.wal_filter: None 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.wal_compression: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T13:42:10.570 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 
bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T13:42:10.570 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_open_files: -1 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 
7f6f30c30d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Compression algorithms supported: 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: kZSTD supported: 0 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.merge_operator: 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compaction_filter: None 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T13:42:10.571 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55db06762640) 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cache_index_and_filter_blocks: 1 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: pin_top_level_index_and_filter: 1 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: index_type: 0 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: data_block_index_type: 0 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: index_shortening: 1 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: data_block_hash_table_util_ratio: 0.750000 2026-03-10T13:42:10.571 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: checksum: 4 2026-03-10T13:42:10.571 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: no_block_cache: 0 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: block_cache: 0x55db06789350 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: block_cache_name: BinnedLRUCache 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: block_cache_options: 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: capacity : 536870912 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: num_shard_bits : 4 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: strict_capacity_limit : 0 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: high_pri_pool_ratio: 0.000 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: block_cache_compressed: (nil) 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: persistent_cache: (nil) 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: block_size: 4096 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: block_size_deviation: 10 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: block_restart_interval: 16 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: index_block_restart_interval: 1 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: metadata_block_size: 4096 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: partition_filters: 0 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
13:42:10 vm00 bash[20748]: use_delta_encoding: 1 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: filter_policy: bloomfilter 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: whole_key_filtering: 1 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: verify_compression: 0 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: read_amp_bytes_per_bit: 0 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: format_version: 5 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: enable_index_compression: 1 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: block_align: 0 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: max_auto_readahead_size: 262144 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: prepopulate_block_cache: 0 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: initial_auto_readahead_size: 8192 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: num_file_reads_for_auto_readahead: 2 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compression: NoCompression 
2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.num_levels: 7 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: 
Options.bottommost_compression_opts.strategy: 0 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 
bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T13:42:10.572 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 
2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: 
Options.compaction_pri: kMinOverlappingRatio 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.table_properties_collectors: 
CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.bloom_locality: 0 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: 
Options.force_consistency_checks: 1 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.ttl: 2592000 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.enable_blob_files: false 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.min_blob_size: 0 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T13:42:10.573 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.452+0000 7f6f30c30d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.460+0000 7f6f30c30d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.460+0000 7f6f30c30d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.460+0000 7f6f30c30d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d653a3a9-80e8-489f-9e64-469e2d920817 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 
2026-03-10T13:42:10.460+0000 7f6f30c30d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773150130463703, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.460+0000 7f6f30c30d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.460+0000 7f6f30c30d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773150130466252, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 83866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 245, "table_properties": {"data_size": 82032, "index_size": 223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 581, "raw_key_size": 10134, "raw_average_key_size": 47, "raw_value_size": 76227, "raw_average_value_size": 359, "num_data_blocks": 10, "num_entries": 212, "num_filter_entries": 212, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773150130, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d653a3a9-80e8-489f-9e64-469e2d920817", "db_session_id": "5TB0XU03QP3YED0WVSTQ", "orig_file_number": 13, 
"seqno_to_time_mapping": "N/A"}} 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.460+0000 7f6f30c30d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773150130466416, "job": 1, "event": "recovery_finished"} 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.460+0000 7f6f30c30d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f30c30d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T13:42:10.573 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f30c30d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55db0678ae00 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f30c30d80 4 rocksdb: DB pointer 0x55db068a0000 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f269fa640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f269fa640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: ** DB Stats ** 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Cumulative writes: 0 writes, 0 keys, 0 
commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: ** Compaction Stats [default] ** 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: L0 2/0 83.76 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 36.7 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Sum 2/0 83.76 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 36.7 0.00 0.00 1 0.002 0 0 0.0 0.0 
2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 36.7 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: ** Compaction Stats [default] ** 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 36.7 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T13:42:10.574 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: AddFile(Keys): cumulative 0, interval 0 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Cumulative compaction: 0.00 GB write, 5.64 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Interval compaction: 0.00 GB write, 5.64 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Block cache BinnedLRUCache@0x55db06789350#8 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9e-06 secs_since: 0 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: Block cache entry stats(count,size,portion): FilterBlock(2,0.77 KB,0.000146031%) IndexBlock(2,0.41 KB,7.7486e-05%) Misc(1,0.00 KB,0%) 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: ** File Read Latency Histogram By Level [default] ** 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f30c30d80 0 starting mon.a rank 0 at public addrs [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] at bind addrs [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 
bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f30c30d80 1 mon.a@-1(???) e1 preinit fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f30c30d80 5 mon.a@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f30c30d80 5 mon.a@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f30c30d80 0 mon.a@-1(???).mds e1 new map 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f30c30d80 0 mon.a@-1(???).mds e1 print_map 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: e1 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: btime 2026-03-10T13:42:08:641766+0000 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: legacy client fscid: -1 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: No filesystems configured 
2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f30c30d80 0 mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f30c30d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f30c30d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f30c30d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f30c30d80 1 mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f30c30d80 4 mon.a@-1(???).mgr e0 loading version 1 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f30c30d80 4 mon.a@-1(???).mgr e1 active server: (0) 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: debug 2026-03-10T13:42:10.468+0000 7f6f30c30d80 4 mon.a@-1(???).mgr e1 mkfs or daemon transitioned to available, loading commands 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.479804+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T13:42:10.574 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.479804+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.479856+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.479856+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-10T13:42:10.574 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.479861+0000 mon.a (mon.0) 3 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:10.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.479861+0000 mon.a (mon.0) 3 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:10.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.479866+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T13:42:07.014183+0000 2026-03-10T13:42:10.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.479866+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T13:42:07.014183+0000 2026-03-10T13:42:10.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.479875+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:42:10.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.479875+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:42:10.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.479880+0000 mon.a (mon.0) 6 : cluster [DBG] 
min_mon_release 19 (squid) 2026-03-10T13:42:10.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.479880+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T13:42:10.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.479885+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-10T13:42:10.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.479885+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-10T13:42:10.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.479890+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:42:10.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.479890+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:42:10.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.480203+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-10T13:42:10.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.480203+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-10T13:42:10.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.480219+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T13:42:10.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.480219+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T13:42:10.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.480948+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: 
no daemons active 2026-03-10T13:42:10.575 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 bash[20748]: cluster 2026-03-10T13:42:10.480948+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-10T13:42:10.597 INFO:teuthology.orchestra.run.vm00.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-10T13:42:10.599 INFO:teuthology.orchestra.run.vm00.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-10T13:42:10.599 INFO:teuthology.orchestra.run.vm00.stdout:Creating mgr... 2026-03-10T13:42:10.599 INFO:teuthology.orchestra.run.vm00.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-10T13:42:10.599 INFO:teuthology.orchestra.run.vm00.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-10T13:42:10.761 INFO:teuthology.orchestra.run.vm00.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mgr.a 2026-03-10T13:42:10.761 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Failed to reset failed state of unit ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mgr.a.service: Unit ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mgr.a.service not loaded. 2026-03-10T13:42:10.877 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T13:42:10.926 INFO:teuthology.orchestra.run.vm00.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a.target.wants/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mgr.a.service → /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service. 
2026-03-10T13:42:10.934 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-10T13:42:10.934 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to enable service . firewalld.service is not available 2026-03-10T13:42:10.934 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present 2026-03-10T13:42:10.934 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-10T13:42:10.934 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr to start... 2026-03-10T13:42:10.934 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr... 2026-03-10T13:42:11.146 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:10 vm00 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T13:42:11.146 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:11 vm00 bash[21015]: debug 2026-03-10T13:42:11.124+0000 7f0019f70140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T13:42:11.174 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T13:42:11.174 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:42:11.174 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "c9620084-1c86-11f1-bcc5-e3fb709eab0a", 2026-03-10T13:42:11.174 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T13:42:11.174 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 
"epoch": 1, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 
2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T13:42:08:641766+0000", 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T13:42:11.175 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T13:42:11.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T13:42:11.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T13:42:11.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:42:11.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:42:11.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:11.176 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T13:42:11.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:42:11.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T13:42:08.642571+0000", 2026-03-10T13:42:11.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:42:11.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:11.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T13:42:11.176 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:42:11.176 INFO:teuthology.orchestra.run.vm00.stdout:mgr not available, waiting (1/15)... 2026-03-10T13:42:11.466 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:11 vm00 bash[21015]: debug 2026-03-10T13:42:11.160+0000 7f0019f70140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T13:42:11.466 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:11 vm00 bash[21015]: debug 2026-03-10T13:42:11.276+0000 7f0019f70140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T13:42:11.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:11 vm00 bash[20748]: audit 2026-03-10T13:42:10.561404+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.100:0/2442875451' entity='client.admin' 2026-03-10T13:42:11.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:11 vm00 bash[20748]: audit 2026-03-10T13:42:10.561404+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.100:0/2442875451' entity='client.admin' 2026-03-10T13:42:11.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:11 vm00 bash[20748]: audit 2026-03-10T13:42:11.136181+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 
192.168.123.100:0/3276766251' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:42:11.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:11 vm00 bash[20748]: audit 2026-03-10T13:42:11.136181+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.100:0/3276766251' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:42:11.966 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:11 vm00 bash[21015]: debug 2026-03-10T13:42:11.564+0000 7f0019f70140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T13:42:12.347 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:12 vm00 bash[21015]: debug 2026-03-10T13:42:12.008+0000 7f0019f70140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T13:42:12.347 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:12 vm00 bash[21015]: debug 2026-03-10T13:42:12.092+0000 7f0019f70140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T13:42:12.347 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:12 vm00 bash[21015]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T13:42:12.347 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:12 vm00 bash[21015]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T13:42:12.347 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:12 vm00 bash[21015]: from numpy import show_config as show_numpy_config 2026-03-10T13:42:12.347 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:12 vm00 bash[21015]: debug 2026-03-10T13:42:12.212+0000 7f0019f70140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T13:42:12.716 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:12 vm00 bash[21015]: debug 2026-03-10T13:42:12.344+0000 7f0019f70140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T13:42:12.716 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:12 vm00 bash[21015]: debug 2026-03-10T13:42:12.380+0000 7f0019f70140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T13:42:12.716 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:12 vm00 bash[21015]: debug 2026-03-10T13:42:12.420+0000 7f0019f70140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T13:42:12.716 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:12 vm00 bash[21015]: debug 2026-03-10T13:42:12.464+0000 7f0019f70140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T13:42:12.716 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:12 vm00 bash[21015]: debug 2026-03-10T13:42:12.516+0000 7f0019f70140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T13:42:13.216 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:12 vm00 bash[21015]: debug 2026-03-10T13:42:12.952+0000 7f0019f70140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T13:42:13.216 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:12 vm00 bash[21015]: debug 2026-03-10T13:42:12.988+0000 7f0019f70140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T13:42:13.216 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:13 vm00 bash[21015]: debug 2026-03-10T13:42:13.024+0000 7f0019f70140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 
2026-03-10T13:42:13.216 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:13 vm00 bash[21015]: debug 2026-03-10T13:42:13.168+0000 7f0019f70140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T13:42:13.216 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:13 vm00 bash[21015]: debug 2026-03-10T13:42:13.212+0000 7f0019f70140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T13:42:13.443 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T13:42:13.443 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:42:13.443 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "c9620084-1c86-11f1-bcc5-e3fb709eab0a", 2026-03-10T13:42:13.443 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T13:42:13.443 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T13:42:13.443 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T13:42:13.443 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T13:42:13.443 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:13.443 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T13:42:13.443 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 
"quorum_age": 2, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0, 
2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T13:42:08:641766+0000", 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:42:13.444 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T13:42:08.642571+0000", 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:42:13.444 INFO:teuthology.orchestra.run.vm00.stdout:mgr not available, waiting (2/15)... 2026-03-10T13:42:13.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:13 vm00 bash[20748]: audit 2026-03-10T13:42:13.400250+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.100:0/467314035' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:42:13.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:13 vm00 bash[20748]: audit 2026-03-10T13:42:13.400250+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 
192.168.123.100:0/467314035' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:42:13.466 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:13 vm00 bash[21015]: debug 2026-03-10T13:42:13.256+0000 7f0019f70140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T13:42:13.466 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:13 vm00 bash[21015]: debug 2026-03-10T13:42:13.384+0000 7f0019f70140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:42:13.818 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:13 vm00 bash[21015]: debug 2026-03-10T13:42:13.560+0000 7f0019f70140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T13:42:13.818 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:13 vm00 bash[21015]: debug 2026-03-10T13:42:13.736+0000 7f0019f70140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T13:42:13.818 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:13 vm00 bash[21015]: debug 2026-03-10T13:42:13.772+0000 7f0019f70140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T13:42:14.189 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:13 vm00 bash[21015]: debug 2026-03-10T13:42:13.816+0000 7f0019f70140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T13:42:14.189 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:13 vm00 bash[21015]: debug 2026-03-10T13:42:13.968+0000 7f0019f70140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:42:14.452 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[21015]: debug 2026-03-10T13:42:14.196+0000 7f0019f70140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T13:42:14.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: cluster 2026-03-10T13:42:14.203046+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon a 2026-03-10T13:42:14.716 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: cluster 2026-03-10T13:42:14.203046+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon a 2026-03-10T13:42:14.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: cluster 2026-03-10T13:42:14.207364+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: a(active, starting, since 0.00439295s) 2026-03-10T13:42:14.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: cluster 2026-03-10T13:42:14.207364+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: a(active, starting, since 0.00439295s) 2026-03-10T13:42:14.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.210097+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:42:14.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.210097+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:42:14.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.210173+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.210173+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.210238+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:42:14.717 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.210238+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.210297+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.210297+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.210349+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.210349+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.210400+0000 mon.a (mon.0) 22 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.210400+0000 mon.a (mon.0) 22 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 
2026-03-10T13:42:14.211349+0000 mon.a (mon.0) 23 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.211349+0000 mon.a (mon.0) 23 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.211665+0000 mon.a (mon.0) 24 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.211665+0000 mon.a (mon.0) 24 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: cluster 2026-03-10T13:42:14.217577+0000 mon.a (mon.0) 25 : cluster [INF] Manager daemon a is now available 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: cluster 2026-03-10T13:42:14.217577+0000 mon.a (mon.0) 25 : cluster [INF] Manager daemon a is now available 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.228305+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.228305+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 
bash[20748]: audit 2026-03-10T13:42:14.230509+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.230509+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.232455+0000 mon.a (mon.0) 28 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.232455+0000 mon.a (mon.0) 28 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.233616+0000 mon.a (mon.0) 29 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.233616+0000 mon.a (mon.0) 29 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.234938+0000 mon.a (mon.0) 30 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:42:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:14 vm00 bash[20748]: audit 2026-03-10T13:42:14.234938+0000 mon.a (mon.0) 30 : audit [INF] from='mgr.14100 
192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:42:15.736 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T13:42:15.736 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:42:15.736 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsid": "c9620084-1c86-11f1-bcc5-e3fb709eab0a", 2026-03-10T13:42:15.736 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "health": { 2026-03-10T13:42:15.736 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-10T13:42:15.736 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-10T13:42:15.736 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-10T13:42:15.736 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:15.736 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-10T13:42:15.736 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-10T13:42:15.736 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 0 2026-03-10T13:42:15.737 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:42:15.737 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-10T13:42:15.737 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "a" 2026-03-10T13:42:15.737 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:42:15.737 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-10T13:42:15.737 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-10T13:42:15.737 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:42:15.737 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-10T13:42:15.737 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-10T13:42:15.737 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:15.737 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-10T13:42:15.737 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:42:15.737 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-10T13:42:15.737 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-10T13:42:15.737 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-10T13:42:15.737 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-10T13:42:15.737 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-10T13:42:15.737 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-10T13:42:15.738 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "btime": "2026-03-10T13:42:08:641766+0000", 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "restful" 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ], 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "servicemap": 
{ 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "modified": "2026-03-10T13:42:08.642571+0000", 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }, 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:42:15.738 INFO:teuthology.orchestra.run.vm00.stdout:mgr is available 2026-03-10T13:42:16.015 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T13:42:16.015 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [global] 2026-03-10T13:42:16.015 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout fsid = c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:16.015 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-10T13:42:16.015 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.100:3300,v1:192.168.123.100:6789] 2026-03-10T13:42:16.015 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-10T13:42:16.015 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-10T13:42:16.015 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-10T13:42:16.015 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-10T13:42:16.015 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T13:42:16.015 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-10T13:42:16.015 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-10T13:42:16.015 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 2026-03-10T13:42:16.015 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout [osd] 2026-03-10T13:42:16.015 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-10T13:42:16.015 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-10T13:42:16.015 INFO:teuthology.orchestra.run.vm00.stdout:Enabling cephadm module... 2026-03-10T13:42:16.458 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:16 vm00 bash[20748]: cluster 2026-03-10T13:42:15.212458+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e3: a(active, since 1.00949s) 2026-03-10T13:42:16.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:16 vm00 bash[20748]: cluster 2026-03-10T13:42:15.212458+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e3: a(active, since 1.00949s) 2026-03-10T13:42:16.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:16 vm00 bash[20748]: audit 2026-03-10T13:42:15.701495+0000 mon.a (mon.0) 32 : audit [DBG] from='client.? 192.168.123.100:0/3957218727' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:42:16.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:16 vm00 bash[20748]: audit 2026-03-10T13:42:15.701495+0000 mon.a (mon.0) 32 : audit [DBG] from='client.? 192.168.123.100:0/3957218727' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:42:16.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:16 vm00 bash[20748]: audit 2026-03-10T13:42:15.974691+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 
192.168.123.100:0/3269230592' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T13:42:16.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:16 vm00 bash[20748]: audit 2026-03-10T13:42:15.974691+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.100:0/3269230592' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T13:42:16.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:16 vm00 bash[20748]: audit 2026-03-10T13:42:15.977243+0000 mon.a (mon.0) 34 : audit [INF] from='client.? 192.168.123.100:0/3269230592' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-10T13:42:16.459 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:16 vm00 bash[20748]: audit 2026-03-10T13:42:15.977243+0000 mon.a (mon.0) 34 : audit [INF] from='client.? 192.168.123.100:0/3269230592' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-10T13:42:17.258 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:17 vm00 bash[21015]: ignoring --setuser ceph since I am not root 2026-03-10T13:42:17.258 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:17 vm00 bash[21015]: ignoring --setgroup ceph since I am not root 2026-03-10T13:42:17.258 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:17 vm00 bash[21015]: debug 2026-03-10T13:42:17.092+0000 7fdd8277e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T13:42:17.258 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:17 vm00 bash[21015]: debug 2026-03-10T13:42:17.132+0000 7fdd8277e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T13:42:17.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:17 vm00 bash[20748]: audit 2026-03-10T13:42:16.258956+0000 mon.a (mon.0) 35 : audit [INF] from='client.? 
192.168.123.100:0/2783189308' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T13:42:17.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:17 vm00 bash[20748]: audit 2026-03-10T13:42:16.258956+0000 mon.a (mon.0) 35 : audit [INF] from='client.? 192.168.123.100:0/2783189308' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T13:42:17.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:17 vm00 bash[20748]: audit 2026-03-10T13:42:16.978789+0000 mon.a (mon.0) 36 : audit [INF] from='client.? 192.168.123.100:0/2783189308' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T13:42:17.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:17 vm00 bash[20748]: audit 2026-03-10T13:42:16.978789+0000 mon.a (mon.0) 36 : audit [INF] from='client.? 192.168.123.100:0/2783189308' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T13:42:17.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:17 vm00 bash[20748]: cluster 2026-03-10T13:42:16.983709+0000 mon.a (mon.0) 37 : cluster [DBG] mgrmap e4: a(active, since 2s) 2026-03-10T13:42:17.258 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:17 vm00 bash[20748]: cluster 2026-03-10T13:42:16.983709+0000 mon.a (mon.0) 37 : cluster [DBG] mgrmap e4: a(active, since 2s) 2026-03-10T13:42:17.413 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:42:17.413 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 4, 2026-03-10T13:42:17.413 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-10T13:42:17.413 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "active_name": "a", 2026-03-10T13:42:17.413 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-10T13:42:17.413 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:42:17.413 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for the mgr to restart... 2026-03-10T13:42:17.414 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr epoch 4... 2026-03-10T13:42:17.571 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:17 vm00 bash[21015]: debug 2026-03-10T13:42:17.256+0000 7fdd8277e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T13:42:17.966 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:17 vm00 bash[21015]: debug 2026-03-10T13:42:17.568+0000 7fdd8277e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T13:42:18.351 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:18 vm00 bash[20748]: audit 2026-03-10T13:42:17.339816+0000 mon.a (mon.0) 38 : audit [DBG] from='client.? 192.168.123.100:0/3329532155' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T13:42:18.351 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:18 vm00 bash[20748]: audit 2026-03-10T13:42:17.339816+0000 mon.a (mon.0) 38 : audit [DBG] from='client.? 192.168.123.100:0/3329532155' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T13:42:18.351 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:18 vm00 bash[21015]: debug 2026-03-10T13:42:18.004+0000 7fdd8277e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T13:42:18.351 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:18 vm00 bash[21015]: debug 2026-03-10T13:42:18.092+0000 7fdd8277e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T13:42:18.351 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:18 vm00 bash[21015]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. 
A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T13:42:18.351 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:18 vm00 bash[21015]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T13:42:18.351 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:18 vm00 bash[21015]: from numpy import show_config as show_numpy_config 2026-03-10T13:42:18.351 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:18 vm00 bash[21015]: debug 2026-03-10T13:42:18.212+0000 7fdd8277e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T13:42:18.716 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:18 vm00 bash[21015]: debug 2026-03-10T13:42:18.348+0000 7fdd8277e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T13:42:18.716 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:18 vm00 bash[21015]: debug 2026-03-10T13:42:18.384+0000 7fdd8277e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T13:42:18.716 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:18 vm00 bash[21015]: debug 2026-03-10T13:42:18.424+0000 7fdd8277e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T13:42:18.716 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:18 vm00 bash[21015]: debug 2026-03-10T13:42:18.464+0000 7fdd8277e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T13:42:18.716 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:18 vm00 bash[21015]: debug 2026-03-10T13:42:18.516+0000 7fdd8277e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T13:42:19.181 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:18 vm00 bash[21015]: debug 2026-03-10T13:42:18.928+0000 7fdd8277e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T13:42:19.181 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 
13:42:18 vm00 bash[21015]: debug 2026-03-10T13:42:18.960+0000 7fdd8277e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T13:42:19.181 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:19 vm00 bash[21015]: debug 2026-03-10T13:42:19.000+0000 7fdd8277e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T13:42:19.181 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:19 vm00 bash[21015]: debug 2026-03-10T13:42:19.140+0000 7fdd8277e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T13:42:19.466 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:19 vm00 bash[21015]: debug 2026-03-10T13:42:19.176+0000 7fdd8277e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T13:42:19.466 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:19 vm00 bash[21015]: debug 2026-03-10T13:42:19.216+0000 7fdd8277e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T13:42:19.466 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:19 vm00 bash[21015]: debug 2026-03-10T13:42:19.328+0000 7fdd8277e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:42:19.747 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:19 vm00 bash[21015]: debug 2026-03-10T13:42:19.488+0000 7fdd8277e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T13:42:19.747 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:19 vm00 bash[21015]: debug 2026-03-10T13:42:19.668+0000 7fdd8277e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T13:42:19.747 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:19 vm00 bash[21015]: debug 2026-03-10T13:42:19.700+0000 7fdd8277e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T13:42:20.127 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:19 vm00 bash[21015]: debug 2026-03-10T13:42:19.744+0000 7fdd8277e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T13:42:20.127 
INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:19 vm00 bash[21015]: debug 2026-03-10T13:42:19.896+0000 7fdd8277e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:42:20.466 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[21015]: debug 2026-03-10T13:42:20.124+0000 7fdd8277e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T13:42:20.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: cluster 2026-03-10T13:42:20.129919+0000 mon.a (mon.0) 39 : cluster [INF] Active manager daemon a restarted 2026-03-10T13:42:20.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: cluster 2026-03-10T13:42:20.129919+0000 mon.a (mon.0) 39 : cluster [INF] Active manager daemon a restarted 2026-03-10T13:42:20.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: cluster 2026-03-10T13:42:20.130407+0000 mon.a (mon.0) 40 : cluster [INF] Activating manager daemon a 2026-03-10T13:42:20.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: cluster 2026-03-10T13:42:20.130407+0000 mon.a (mon.0) 40 : cluster [INF] Activating manager daemon a 2026-03-10T13:42:20.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: cluster 2026-03-10T13:42:20.136700+0000 mon.a (mon.0) 41 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-10T13:42:20.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: cluster 2026-03-10T13:42:20.136700+0000 mon.a (mon.0) 41 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: cluster 2026-03-10T13:42:20.136875+0000 mon.a (mon.0) 42 : cluster [DBG] mgrmap e5: a(active, starting, since 0.00658721s) 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: cluster 2026-03-10T13:42:20.136875+0000 mon.a (mon.0) 42 : cluster [DBG] 
mgrmap e5: a(active, starting, since 0.00658721s) 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.138919+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.138919+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.140424+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.140424+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.140923+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.140923+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.141259+0000 mon.a (mon.0) 46 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: 
dispatch 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.141259+0000 mon.a (mon.0) 46 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.141577+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.141577+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: cluster 2026-03-10T13:42:20.147498+0000 mon.a (mon.0) 48 : cluster [INF] Manager daemon a is now available 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: cluster 2026-03-10T13:42:20.147498+0000 mon.a (mon.0) 48 : cluster [INF] Manager daemon a is now available 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.157163+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.157163+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.161390+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: 
audit 2026-03-10T13:42:20.161390+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.175354+0000 mon.a (mon.0) 51 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.175354+0000 mon.a (mon.0) 51 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.177927+0000 mon.a (mon.0) 52 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.177927+0000 mon.a (mon.0) 52 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.179137+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:42:20.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:20 vm00 bash[20748]: audit 2026-03-10T13:42:20.179137+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:42:21.188 
INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout { 2026-03-10T13:42:21.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 6, 2026-03-10T13:42:21.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-10T13:42:21.188 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout } 2026-03-10T13:42:21.188 INFO:teuthology.orchestra.run.vm00.stdout:mgr epoch 4 is available 2026-03-10T13:42:21.188 INFO:teuthology.orchestra.run.vm00.stdout:Setting orchestrator backend to cephadm... 2026-03-10T13:42:21.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:21 vm00 bash[20748]: cephadm 2026-03-10T13:42:20.154273+0000 mgr.a (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-10T13:42:21.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:21 vm00 bash[20748]: cephadm 2026-03-10T13:42:20.154273+0000 mgr.a (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 
2026-03-10T13:42:21.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:21 vm00 bash[20748]: audit 2026-03-10T13:42:20.192082+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:42:21.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:21 vm00 bash[20748]: audit 2026-03-10T13:42:20.192082+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:42:21.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:21 vm00 bash[20748]: cluster 2026-03-10T13:42:21.140551+0000 mon.a (mon.0) 55 : cluster [DBG] mgrmap e6: a(active, since 1.01026s) 2026-03-10T13:42:21.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:21 vm00 bash[20748]: cluster 2026-03-10T13:42:21.140551+0000 mon.a (mon.0) 55 : cluster [DBG] mgrmap e6: a(active, since 1.01026s) 2026-03-10T13:42:21.890 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-10T13:42:21.890 INFO:teuthology.orchestra.run.vm00.stdout:Generating ssh key... 
2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:21.142551+0000 mgr.a (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:21.142551+0000 mgr.a (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:21.146784+0000 mgr.a (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:21.146784+0000 mgr.a (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:21.225053+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:21.225053+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:21.229478+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:21.229478+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:42:22.414 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:21.449061+0000 mgr.a (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:21.449061+0000 mgr.a (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:21.514112+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:21.514112+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:21.521278+0000 mon.a (mon.0) 59 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:21.521278+0000 mon.a (mon.0) 59 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: cephadm 2026-03-10T13:42:21.808683+0000 mgr.a (mgr.14118) 5 : cephadm [INF] [10/Mar/2026:13:42:21] ENGINE Bus STARTING 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: cephadm 2026-03-10T13:42:21.808683+0000 mgr.a (mgr.14118) 5 
: cephadm [INF] [10/Mar/2026:13:42:21] ENGINE Bus STARTING 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:21.856651+0000 mgr.a (mgr.14118) 6 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:21.856651+0000 mgr.a (mgr.14118) 6 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: cephadm 2026-03-10T13:42:21.921254+0000 mgr.a (mgr.14118) 7 : cephadm [INF] [10/Mar/2026:13:42:21] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: cephadm 2026-03-10T13:42:21.921254+0000 mgr.a (mgr.14118) 7 : cephadm [INF] [10/Mar/2026:13:42:21] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: cephadm 2026-03-10T13:42:21.921798+0000 mgr.a (mgr.14118) 8 : cephadm [INF] [10/Mar/2026:13:42:21] ENGINE Client ('192.168.123.100', 45222) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: cephadm 2026-03-10T13:42:21.921798+0000 mgr.a (mgr.14118) 8 : cephadm [INF] [10/Mar/2026:13:42:21] ENGINE Client ('192.168.123.100', 45222) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: cephadm 
2026-03-10T13:42:22.022182+0000 mgr.a (mgr.14118) 9 : cephadm [INF] [10/Mar/2026:13:42:22] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: cephadm 2026-03-10T13:42:22.022182+0000 mgr.a (mgr.14118) 9 : cephadm [INF] [10/Mar/2026:13:42:22] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T13:42:22.414 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: cephadm 2026-03-10T13:42:22.022220+0000 mgr.a (mgr.14118) 10 : cephadm [INF] [10/Mar/2026:13:42:22] ENGINE Bus STARTED 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: cephadm 2026-03-10T13:42:22.022220+0000 mgr.a (mgr.14118) 10 : cephadm [INF] [10/Mar/2026:13:42:22] ENGINE Bus STARTED 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:22.022707+0000 mon.a (mon.0) 60 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:22.022707+0000 mon.a (mon.0) 60 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:22.130062+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:22.130062+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:22.133691+0000 mon.a (mon.0) 62 : audit 
[INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[20748]: audit 2026-03-10T13:42:22.133691+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[21015]: Generating public/private ed25519 key pair. 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[21015]: Your identification has been saved in /tmp/tmp8uuff_l0/key 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[21015]: Your public key has been saved in /tmp/tmp8uuff_l0/key.pub 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[21015]: The key fingerprint is: 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[21015]: SHA256:cUPd5AMhlayeKIeO9AVFxBbEmWTABOsvVlYprk/TZ6k ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[21015]: The key's randomart image is: 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[21015]: +--[ED25519 256]--+ 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[21015]: | .+oOB+o++=. | 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[21015]: | ...Bo .+o. | 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[21015]: | . .o+ o. o | 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[21015]: | . ..o o.. . | 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[21015]: | . +oSo . | 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[21015]: | .=o.+ o. 
| 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[21015]: | .++++. + | 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[21015]: | ..+o. + | 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[21015]: | . E | 2026-03-10T13:42:22.415 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:22 vm00 bash[21015]: +----[SHA256]-----+ 2026-03-10T13:42:22.443 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH8Ic0mYSIpAp6XC4fxwU+GL3nEG/NBEdqJfshZrn0KN ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:22.443 INFO:teuthology.orchestra.run.vm00.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-10T13:42:22.443 INFO:teuthology.orchestra.run.vm00.stdout:Adding key to root@localhost authorized_keys... 2026-03-10T13:42:22.443 INFO:teuthology.orchestra.run.vm00.stdout:Adding host vm00... 2026-03-10T13:42:23.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:23 vm00 bash[20748]: audit 2026-03-10T13:42:22.111468+0000 mgr.a (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:42:23.259 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:23 vm00 bash[20748]: audit 2026-03-10T13:42:22.111468+0000 mgr.a (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:42:23.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:23 vm00 bash[20748]: cephadm 2026-03-10T13:42:22.111658+0000 mgr.a (mgr.14118) 12 : cephadm [INF] Generating ssh key... 2026-03-10T13:42:23.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:23 vm00 bash[20748]: cephadm 2026-03-10T13:42:22.111658+0000 mgr.a (mgr.14118) 12 : cephadm [INF] Generating ssh key... 
2026-03-10T13:42:23.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:23 vm00 bash[20748]: audit 2026-03-10T13:42:22.403946+0000 mgr.a (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:42:23.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:23 vm00 bash[20748]: audit 2026-03-10T13:42:22.403946+0000 mgr.a (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:42:23.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:23 vm00 bash[20748]: audit 2026-03-10T13:42:22.656992+0000 mgr.a (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:42:23.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:23 vm00 bash[20748]: audit 2026-03-10T13:42:22.656992+0000 mgr.a (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:42:23.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:23 vm00 bash[20748]: cluster 2026-03-10T13:42:23.133430+0000 mon.a (mon.0) 63 : cluster [DBG] mgrmap e7: a(active, since 3s)
2026-03-10T13:42:23.260 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:23 vm00 bash[20748]: cluster 2026-03-10T13:42:23.133430+0000 mon.a (mon.0) 63 : cluster [DBG] mgrmap e7: a(active, since 3s)
2026-03-10T13:42:24.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:24 vm00 bash[20748]: cephadm 2026-03-10T13:42:23.224175+0000 mgr.a (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm00
2026-03-10T13:42:24.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:24 vm00 bash[20748]: cephadm 2026-03-10T13:42:23.224175+0000 mgr.a (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm00
2026-03-10T13:42:24.555 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Added host 'vm00' with addr '192.168.123.100'
2026-03-10T13:42:24.555 INFO:teuthology.orchestra.run.vm00.stdout:Deploying unmanaged mon service...
2026-03-10T13:42:24.875 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled mon update...
2026-03-10T13:42:24.875 INFO:teuthology.orchestra.run.vm00.stdout:Deploying unmanaged mgr service...
2026-03-10T13:42:25.140 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Scheduled mgr update...
2026-03-10T13:42:25.657 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:25 vm00 bash[20748]: audit 2026-03-10T13:42:24.492745+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a'
2026-03-10T13:42:25.657 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:25 vm00 bash[20748]: audit 2026-03-10T13:42:24.492745+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a'
2026-03-10T13:42:25.657 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:25 vm00 bash[20748]: cephadm 2026-03-10T13:42:24.493128+0000 mgr.a (mgr.14118) 16 : cephadm [INF] Added host vm00
2026-03-10T13:42:25.657 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:25 vm00 bash[20748]: cephadm 2026-03-10T13:42:24.493128+0000 mgr.a (mgr.14118) 16 : cephadm [INF] Added host vm00
2026-03-10T13:42:25.657 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:25 vm00 bash[20748]: audit 2026-03-10T13:42:24.495611+0000 mon.a (mon.0) 65 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:42:25.657 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:25 vm00 bash[20748]: audit 2026-03-10T13:42:24.495611+0000 mon.a (mon.0) 65 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:42:25.657 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:25 vm00 bash[20748]: audit 2026-03-10T13:42:24.836106+0000 mgr.a (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:42:25.658 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:25 vm00 bash[20748]: audit 2026-03-10T13:42:24.836106+0000 mgr.a (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:42:25.658 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:25 vm00 bash[20748]: cephadm 2026-03-10T13:42:24.837072+0000 mgr.a (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5
2026-03-10T13:42:25.658 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:25 vm00 bash[20748]: cephadm 2026-03-10T13:42:24.837072+0000 mgr.a (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5
2026-03-10T13:42:25.658 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:25 vm00 bash[20748]: audit 2026-03-10T13:42:24.840211+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a'
2026-03-10T13:42:25.658 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:25 vm00 bash[20748]: audit 2026-03-10T13:42:24.840211+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a'
2026-03-10T13:42:25.658 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:25 vm00 bash[20748]: audit 2026-03-10T13:42:25.105693+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a'
2026-03-10T13:42:25.658 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:25 vm00 bash[20748]: audit 2026-03-10T13:42:25.105693+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a'
2026-03-10T13:42:25.658 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:25 vm00 bash[20748]: audit 2026-03-10T13:42:25.359200+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.100:0/1988724834' entity='client.admin'
2026-03-10T13:42:25.658 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:25 vm00 bash[20748]: audit 2026-03-10T13:42:25.359200+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.100:0/1988724834' entity='client.admin'
2026-03-10T13:42:25.685 INFO:teuthology.orchestra.run.vm00.stdout:Enabling the dashboard module...
2026-03-10T13:42:26.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:26 vm00 bash[20748]: audit 2026-03-10T13:42:25.100999+0000 mgr.a (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:42:26.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:26 vm00 bash[20748]: audit 2026-03-10T13:42:25.100999+0000 mgr.a (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:42:26.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:26 vm00 bash[20748]: cephadm 2026-03-10T13:42:25.101734+0000 mgr.a (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2
2026-03-10T13:42:26.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:26 vm00 bash[20748]: cephadm 2026-03-10T13:42:25.101734+0000 mgr.a (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2
2026-03-10T13:42:26.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:26 vm00 bash[20748]: audit 2026-03-10T13:42:25.644193+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.100:0/4183873511' entity='client.admin'
2026-03-10T13:42:26.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:26 vm00 bash[20748]: audit 2026-03-10T13:42:25.644193+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.100:0/4183873511' entity='client.admin'
2026-03-10T13:42:26.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:26 vm00 bash[20748]: audit 2026-03-10T13:42:25.954844+0000 mon.a (mon.0) 70 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a'
2026-03-10T13:42:26.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:26 vm00 bash[20748]: audit 2026-03-10T13:42:25.954844+0000 mon.a (mon.0) 70 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a'
2026-03-10T13:42:26.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:26 vm00 bash[20748]: audit 2026-03-10T13:42:26.017557+0000 mon.a (mon.0) 71 : audit [INF] from='client.? 192.168.123.100:0/3089668869' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-10T13:42:26.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:26 vm00 bash[20748]: audit 2026-03-10T13:42:26.017557+0000 mon.a (mon.0) 71 : audit [INF] from='client.? 192.168.123.100:0/3089668869' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-10T13:42:26.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:26 vm00 bash[20748]: audit 2026-03-10T13:42:26.261673+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a'
2026-03-10T13:42:26.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:26 vm00 bash[20748]: audit 2026-03-10T13:42:26.261673+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a'
2026-03-10T13:42:27.254 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:27 vm00 bash[21015]: ignoring --setuser ceph since I am not root
2026-03-10T13:42:27.254 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:27 vm00 bash[21015]: ignoring --setgroup ceph since I am not root
2026-03-10T13:42:27.254 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:27 vm00 bash[21015]: debug 2026-03-10T13:42:27.096+0000 7f3db1f08140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T13:42:27.254 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:27 vm00 bash[21015]: debug 2026-03-10T13:42:27.136+0000 7f3db1f08140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T13:42:27.254 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:27 vm00 bash[21015]: debug 2026-03-10T13:42:27.252+0000 7f3db1f08140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T13:42:27.358 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {
2026-03-10T13:42:27.358 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "epoch": 8,
2026-03-10T13:42:27.358 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "available": true,
2026-03-10T13:42:27.358 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "active_name": "a",
2026-03-10T13:42:27.358 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "num_standby": 0
2026-03-10T13:42:27.358 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }
2026-03-10T13:42:27.358 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for the mgr to restart...
2026-03-10T13:42:27.358 INFO:teuthology.orchestra.run.vm00.stdout:Waiting for mgr epoch 8...
2026-03-10T13:42:27.956 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:27 vm00 bash[21015]: debug 2026-03-10T13:42:27.576+0000 7f3db1f08140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T13:42:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:27 vm00 bash[20748]: audit 2026-03-10T13:42:26.955756+0000 mon.a (mon.0) 73 : audit [INF] from='client.? 192.168.123.100:0/3089668869' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-10T13:42:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:27 vm00 bash[20748]: audit 2026-03-10T13:42:26.955756+0000 mon.a (mon.0) 73 : audit [INF] from='client.? 192.168.123.100:0/3089668869' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
2026-03-10T13:42:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:27 vm00 bash[20748]: cluster 2026-03-10T13:42:26.958843+0000 mon.a (mon.0) 74 : cluster [DBG] mgrmap e8: a(active, since 6s)
2026-03-10T13:42:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:27 vm00 bash[20748]: cluster 2026-03-10T13:42:26.958843+0000 mon.a (mon.0) 74 : cluster [DBG] mgrmap e8: a(active, since 6s)
2026-03-10T13:42:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:27 vm00 bash[20748]: audit 2026-03-10T13:42:27.313786+0000 mon.a (mon.0) 75 : audit [DBG] from='client.? 192.168.123.100:0/463782601' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T13:42:28.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:27 vm00 bash[20748]: audit 2026-03-10T13:42:27.313786+0000 mon.a (mon.0) 75 : audit [DBG] from='client.? 192.168.123.100:0/463782601' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
2026-03-10T13:42:28.216 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:28 vm00 bash[21015]: debug 2026-03-10T13:42:28.040+0000 7f3db1f08140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T13:42:28.216 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:28 vm00 bash[21015]: debug 2026-03-10T13:42:28.120+0000 7f3db1f08140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T13:42:28.508 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:28 vm00 bash[21015]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T13:42:28.508 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:28 vm00 bash[21015]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T13:42:28.508 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:28 vm00 bash[21015]: from numpy import show_config as show_numpy_config
2026-03-10T13:42:28.508 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:28 vm00 bash[21015]: debug 2026-03-10T13:42:28.244+0000 7f3db1f08140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T13:42:28.508 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:28 vm00 bash[21015]: debug 2026-03-10T13:42:28.380+0000 7f3db1f08140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T13:42:28.509 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:28 vm00 bash[21015]: debug 2026-03-10T13:42:28.420+0000 7f3db1f08140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T13:42:28.509 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:28 vm00 bash[21015]: debug 2026-03-10T13:42:28.460+0000 7f3db1f08140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T13:42:28.966 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:28 vm00 bash[21015]: debug 2026-03-10T13:42:28.504+0000 7f3db1f08140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T13:42:28.966 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:28 vm00 bash[21015]: debug 2026-03-10T13:42:28.556+0000 7f3db1f08140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T13:42:29.265 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:28 vm00 bash[21015]: debug 2026-03-10T13:42:28.988+0000 7f3db1f08140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T13:42:29.265 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:29 vm00 bash[21015]: debug 2026-03-10T13:42:29.024+0000 7f3db1f08140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T13:42:29.265 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:29 vm00 bash[21015]: debug 2026-03-10T13:42:29.064+0000 7f3db1f08140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T13:42:29.265 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:29 vm00 bash[21015]: debug 2026-03-10T13:42:29.220+0000 7f3db1f08140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T13:42:29.265 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:29 vm00 bash[21015]: debug 2026-03-10T13:42:29.260+0000 7f3db1f08140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T13:42:29.571 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:29 vm00 bash[21015]: debug 2026-03-10T13:42:29.304+0000 7f3db1f08140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T13:42:29.571 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:29 vm00 bash[21015]: debug 2026-03-10T13:42:29.416+0000 7f3db1f08140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T13:42:29.957 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:29 vm00 bash[21015]: debug 2026-03-10T13:42:29.568+0000 7f3db1f08140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T13:42:29.957 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:29 vm00 bash[21015]: debug 2026-03-10T13:42:29.736+0000 7f3db1f08140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T13:42:29.957 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:29 vm00 bash[21015]: debug 2026-03-10T13:42:29.772+0000 7f3db1f08140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T13:42:29.957 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:29 vm00 bash[21015]: debug 2026-03-10T13:42:29.812+0000 7f3db1f08140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T13:42:30.216 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:29 vm00 bash[21015]: debug 2026-03-10T13:42:29.952+0000 7f3db1f08140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T13:42:30.216 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[21015]: debug 2026-03-10T13:42:30.184+0000 7f3db1f08140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T13:42:30.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: cluster 2026-03-10T13:42:30.190698+0000 mon.a (mon.0) 76 : cluster [INF] Active manager daemon a restarted
2026-03-10T13:42:30.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: cluster 2026-03-10T13:42:30.190698+0000 mon.a (mon.0) 76 : cluster [INF] Active manager daemon a restarted
2026-03-10T13:42:30.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: cluster 2026-03-10T13:42:30.190953+0000 mon.a (mon.0) 77 : cluster [INF] Activating manager daemon a
2026-03-10T13:42:30.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: cluster 2026-03-10T13:42:30.190953+0000 mon.a (mon.0) 77 : cluster [INF] Activating manager daemon a
2026-03-10T13:42:30.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: cluster 2026-03-10T13:42:30.195647+0000 mon.a (mon.0) 78 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in
2026-03-10T13:42:30.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: cluster 2026-03-10T13:42:30.195647+0000 mon.a (mon.0) 78 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in
2026-03-10T13:42:30.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: cluster 2026-03-10T13:42:30.195782+0000 mon.a (mon.0) 79 : cluster [DBG] mgrmap e9: a(active, starting, since 0.00494311s)
2026-03-10T13:42:30.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: cluster 2026-03-10T13:42:30.195782+0000 mon.a (mon.0) 79 : cluster [DBG] mgrmap e9: a(active, starting, since 0.00494311s)
2026-03-10T13:42:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: audit 2026-03-10T13:42:30.198012+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T13:42:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: audit 2026-03-10T13:42:30.198012+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T13:42:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: audit 2026-03-10T13:42:30.198780+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-10T13:42:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: audit 2026-03-10T13:42:30.198780+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch
2026-03-10T13:42:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: audit 2026-03-10T13:42:30.199459+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T13:42:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: audit 2026-03-10T13:42:30.199459+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T13:42:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: audit 2026-03-10T13:42:30.199564+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T13:42:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: audit 2026-03-10T13:42:30.199564+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T13:42:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: audit 2026-03-10T13:42:30.199657+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T13:42:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: audit 2026-03-10T13:42:30.199657+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T13:42:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: cluster 2026-03-10T13:42:30.204993+0000 mon.a (mon.0) 85 : cluster [INF] Manager daemon a is now available
2026-03-10T13:42:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: cluster 2026-03-10T13:42:30.204993+0000 mon.a (mon.0) 85 : cluster [INF] Manager daemon a is now available
2026-03-10T13:42:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: audit 2026-03-10T13:42:30.220832+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T13:42:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: audit 2026-03-10T13:42:30.220832+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch
2026-03-10T13:42:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: audit 2026-03-10T13:42:30.223605+0000 mon.a (mon.0) 87 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:42:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: audit 2026-03-10T13:42:30.223605+0000 mon.a (mon.0) 87 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:42:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: audit 2026-03-10T13:42:30.233988+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T13:42:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:30 vm00 bash[20748]: audit 2026-03-10T13:42:30.233988+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch
2026-03-10T13:42:31.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {
2026-03-10T13:42:31.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 10,
2026-03-10T13:42:31.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-10T13:42:31.238 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout }
2026-03-10T13:42:31.238 INFO:teuthology.orchestra.run.vm00.stdout:mgr epoch 8 is available
2026-03-10T13:42:31.238 INFO:teuthology.orchestra.run.vm00.stdout:Generating a dashboard self-signed certificate...
2026-03-10T13:42:31.535 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout Self-signed certificate created
2026-03-10T13:42:31.535 INFO:teuthology.orchestra.run.vm00.stdout:Creating initial admin user...
2026-03-10T13:42:31.958 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$qNGuQdygNydclSVA0dcyDuGz5bMLAMOKA5jplFzZi9K2V9/iKuW76", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773150151, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true}
2026-03-10T13:42:31.958 INFO:teuthology.orchestra.run.vm00.stdout:Fetching dashboard port number...
2026-03-10T13:42:32.221 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stdout 8443
2026-03-10T13:42:32.221 INFO:teuthology.orchestra.run.vm00.stdout:firewalld does not appear to be present
2026-03-10T13:42:32.221 INFO:teuthology.orchestra.run.vm00.stdout:Not possible to open ports <[8443]>. firewalld.service is not available
2026-03-10T13:42:32.222 INFO:teuthology.orchestra.run.vm00.stdout:Ceph Dashboard is now available at:
2026-03-10T13:42:32.222 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T13:42:32.222 INFO:teuthology.orchestra.run.vm00.stdout: URL: https://vm00.local:8443/
2026-03-10T13:42:32.222 INFO:teuthology.orchestra.run.vm00.stdout: User: admin
2026-03-10T13:42:32.222 INFO:teuthology.orchestra.run.vm00.stdout: Password: 6jfgm0lc4m
2026-03-10T13:42:32.222 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T13:42:32.222 INFO:teuthology.orchestra.run.vm00.stdout:Saving cluster configuration to /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config directory
2026-03-10T13:42:32.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: cephadm 2026-03-10T13:42:31.136510+0000 mgr.a (mgr.14150) 1 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Bus STARTING
2026-03-10T13:42:32.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: cephadm 2026-03-10T13:42:31.136510+0000 mgr.a (mgr.14150) 1 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Bus STARTING
2026-03-10T13:42:32.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: cluster 2026-03-10T13:42:31.199043+0000 mon.a (mon.0) 89 : cluster [DBG] mgrmap e10: a(active, since 1.0082s)
2026-03-10T13:42:32.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: cluster 2026-03-10T13:42:31.199043+0000 mon.a (mon.0) 89 : cluster [DBG] mgrmap e10: a(active, since 1.0082s)
2026-03-10T13:42:32.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: audit 2026-03-10T13:42:31.201124+0000 mgr.a (mgr.14150) 2 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: audit 2026-03-10T13:42:31.201124+0000 mgr.a (mgr.14150) 2 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: audit 2026-03-10T13:42:31.204773+0000 mgr.a (mgr.14150) 3 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: audit 2026-03-10T13:42:31.204773+0000 mgr.a (mgr.14150) 3 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: cephadm 2026-03-10T13:42:31.244346+0000 mgr.a (mgr.14150) 4 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Serving on https://192.168.123.100:7150
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: cephadm 2026-03-10T13:42:31.244346+0000 mgr.a (mgr.14150) 4 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Serving on https://192.168.123.100:7150
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: cephadm 2026-03-10T13:42:31.246624+0000 mgr.a (mgr.14150) 5 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Client ('192.168.123.100', 51908) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: cephadm 2026-03-10T13:42:31.246624+0000 mgr.a (mgr.14150) 5 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Client ('192.168.123.100', 51908) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: cephadm 2026-03-10T13:42:31.246774+0000 mgr.a (mgr.14150) 6 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Serving on http://192.168.123.100:8765
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: cephadm 2026-03-10T13:42:31.246774+0000 mgr.a (mgr.14150) 6 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Serving on http://192.168.123.100:8765
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: cephadm 2026-03-10T13:42:31.247014+0000 mgr.a (mgr.14150) 7 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Bus STARTED
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: cephadm 2026-03-10T13:42:31.247014+0000 mgr.a (mgr.14150) 7 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Bus STARTED
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: audit 2026-03-10T13:42:31.463525+0000 mgr.a (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: audit 2026-03-10T13:42:31.463525+0000 mgr.a (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: audit 2026-03-10T13:42:31.498941+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: audit 2026-03-10T13:42:31.498941+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: audit 2026-03-10T13:42:31.501363+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: audit 2026-03-10T13:42:31.501363+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: audit 2026-03-10T13:42:31.771492+0000 mgr.a (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: audit 2026-03-10T13:42:31.771492+0000 mgr.a (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: audit 2026-03-10T13:42:31.922633+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: audit 2026-03-10T13:42:31.922633+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: audit 2026-03-10T13:42:32.174796+0000 mon.a (mon.0) 93 : audit [DBG] from='client.? 192.168.123.100:0/1131920560' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-10T13:42:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:32 vm00 bash[20748]: audit 2026-03-10T13:42:32.174796+0000 mon.a (mon.0) 93 : audit [DBG] from='client.? 192.168.123.100:0/1131920560' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch
2026-03-10T13:42:32.536 INFO:teuthology.orchestra.run.vm00.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status
2026-03-10T13:42:32.537 INFO:teuthology.orchestra.run.vm00.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config:
2026-03-10T13:42:32.537 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T13:42:32.537 INFO:teuthology.orchestra.run.vm00.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
2026-03-10T13:42:32.537 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T13:42:32.537 INFO:teuthology.orchestra.run.vm00.stdout:Or, if you are only running a single cluster on this host:
2026-03-10T13:42:32.537 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T13:42:32.537 INFO:teuthology.orchestra.run.vm00.stdout: sudo /home/ubuntu/cephtest/cephadm shell
2026-03-10T13:42:32.537 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T13:42:32.537 INFO:teuthology.orchestra.run.vm00.stdout:Please consider enabling telemetry to help improve Ceph:
2026-03-10T13:42:32.537 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T13:42:32.537 INFO:teuthology.orchestra.run.vm00.stdout: ceph telemetry on
2026-03-10T13:42:32.537 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T13:42:32.537 INFO:teuthology.orchestra.run.vm00.stdout:For more information see:
2026-03-10T13:42:32.537 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T13:42:32.537 INFO:teuthology.orchestra.run.vm00.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/
2026-03-10T13:42:32.537 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T13:42:32.537 INFO:teuthology.orchestra.run.vm00.stdout:Bootstrap complete.
2026-03-10T13:42:32.561 INFO:tasks.cephadm:Fetching config...
2026-03-10T13:42:32.561 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T13:42:32.561 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.conf of=/dev/stdout
2026-03-10T13:42:32.563 INFO:tasks.cephadm:Fetching client.admin keyring...
2026-03-10T13:42:32.563 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T13:42:32.563 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout
2026-03-10T13:42:32.608 INFO:tasks.cephadm:Fetching mon keyring...
2026-03-10T13:42:32.608 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T13:42:32.608 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/keyring of=/dev/stdout
2026-03-10T13:42:32.656 INFO:tasks.cephadm:Fetching pub ssh key...
2026-03-10T13:42:32.656 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-10T13:42:32.656 DEBUG:teuthology.orchestra.run.vm00:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout
2026-03-10T13:42:32.700 INFO:tasks.cephadm:Installing pub ssh key for root users...
2026-03-10T13:42:32.700 DEBUG:teuthology.orchestra.run.vm00:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH8Ic0mYSIpAp6XC4fxwU+GL3nEG/NBEdqJfshZrn0KN ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T13:42:32.752 INFO:teuthology.orchestra.run.vm00.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH8Ic0mYSIpAp6XC4fxwU+GL3nEG/NBEdqJfshZrn0KN ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:32.757 DEBUG:teuthology.orchestra.run.vm07:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH8Ic0mYSIpAp6XC4fxwU+GL3nEG/NBEdqJfshZrn0KN ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T13:42:32.769 INFO:teuthology.orchestra.run.vm07.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH8Ic0mYSIpAp6XC4fxwU+GL3nEG/NBEdqJfshZrn0KN ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:32.775 DEBUG:teuthology.orchestra.run.vm08:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH8Ic0mYSIpAp6XC4fxwU+GL3nEG/NBEdqJfshZrn0KN ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T13:42:32.787 INFO:teuthology.orchestra.run.vm08.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH8Ic0mYSIpAp6XC4fxwU+GL3nEG/NBEdqJfshZrn0KN ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:42:32.791 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-10T13:42:33.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:33 vm00 bash[20748]: audit 2026-03-10T13:42:32.497645+0000 mon.a 
(mon.0) 94 : audit [INF] from='client.? 192.168.123.100:0/3905016985' entity='client.admin' 2026-03-10T13:42:33.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:33 vm00 bash[20748]: audit 2026-03-10T13:42:32.497645+0000 mon.a (mon.0) 94 : audit [INF] from='client.? 192.168.123.100:0/3905016985' entity='client.admin' 2026-03-10T13:42:33.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:33 vm00 bash[20748]: cluster 2026-03-10T13:42:32.926535+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e11: a(active, since 2s) 2026-03-10T13:42:33.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:33 vm00 bash[20748]: cluster 2026-03-10T13:42:32.926535+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e11: a(active, since 2s) 2026-03-10T13:42:36.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:36 vm00 bash[20748]: audit 2026-03-10T13:42:35.186165+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:36.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:36 vm00 bash[20748]: audit 2026-03-10T13:42:35.186165+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:36.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:36 vm00 bash[20748]: audit 2026-03-10T13:42:35.744748+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:36.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:36 vm00 bash[20748]: audit 2026-03-10T13:42:35.744748+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:37.049 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:42:37.366 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-10T13:42:37.366 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-10T13:42:38.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:38 vm00 bash[20748]: cluster 2026-03-10T13:42:37.190410+0000 mon.a (mon.0) 98 : cluster [DBG] mgrmap e12: a(active, since 6s) 2026-03-10T13:42:38.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:38 vm00 bash[20748]: cluster 2026-03-10T13:42:37.190410+0000 mon.a (mon.0) 98 : cluster [DBG] mgrmap e12: a(active, since 6s) 2026-03-10T13:42:38.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:38 vm00 bash[20748]: audit 2026-03-10T13:42:37.311115+0000 mon.a (mon.0) 99 : audit [INF] from='client.? 192.168.123.100:0/1747826160' entity='client.admin' 2026-03-10T13:42:38.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:38 vm00 bash[20748]: audit 2026-03-10T13:42:37.311115+0000 mon.a (mon.0) 99 : audit [INF] from='client.? 192.168.123.100:0/1747826160' entity='client.admin' 2026-03-10T13:42:41.062 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:42:41.370 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm07 2026-03-10T13:42:41.370 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T13:42:41.370 DEBUG:teuthology.orchestra.run.vm07:> dd of=/etc/ceph/ceph.conf 2026-03-10T13:42:41.373 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T13:42:41.373 DEBUG:teuthology.orchestra.run.vm07:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:42:41.419 INFO:tasks.cephadm:Adding host vm07 to orchestrator... 
2026-03-10T13:42:41.419 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph orch host add vm07 2026-03-10T13:42:42.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.315246+0000 mgr.a (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:42:42.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.315246+0000 mgr.a (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:42:42.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.318139+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:42.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.318139+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:42.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.854952+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:42.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.854952+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:42.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.857035+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.857035+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.857616+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.857616+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.858418+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.858418+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.858992+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: 
audit 2026-03-10T13:42:41.858992+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: cephadm 2026-03-10T13:42:41.859760+0000 mgr.a (mgr.14150) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: cephadm 2026-03-10T13:42:41.859760+0000 mgr.a (mgr.14150) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: cephadm 2026-03-10T13:42:41.888501+0000 mgr.a (mgr.14150) 12 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: cephadm 2026-03-10T13:42:41.888501+0000 mgr.a (mgr.14150) 12 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: cephadm 2026-03-10T13:42:41.916024+0000 mgr.a (mgr.14150) 13 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: cephadm 2026-03-10T13:42:41.916024+0000 mgr.a (mgr.14150) 13 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: cephadm 2026-03-10T13:42:41.945502+0000 mgr.a (mgr.14150) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: cephadm 2026-03-10T13:42:41.945502+0000 mgr.a (mgr.14150) 
14 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.978435+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.978435+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.980695+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.980695+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.982772+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.982772+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.987647+0000 mon.a (mon.0) 109 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.987647+0000 mon.a (mon.0) 109 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' 
entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.988592+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.988592+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.989181+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.989181+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.991443+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:42 vm00 bash[20748]: audit 2026-03-10T13:42:41.991443+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:45.069 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:42:46.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:46 vm00 bash[20748]: audit 2026-03-10T13:42:45.365661+0000 mgr.a (mgr.14150) 15 : 
audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm07", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:42:46.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:46 vm00 bash[20748]: audit 2026-03-10T13:42:45.365661+0000 mgr.a (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm07", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:42:46.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:46 vm00 bash[20748]: cephadm 2026-03-10T13:42:45.912254+0000 mgr.a (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm07 2026-03-10T13:42:46.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:46 vm00 bash[20748]: cephadm 2026-03-10T13:42:45.912254+0000 mgr.a (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm07 2026-03-10T13:42:47.129 INFO:teuthology.orchestra.run.vm00.stdout:Added host 'vm07' with addr '192.168.123.107' 2026-03-10T13:42:47.193 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph orch host ls --format=json 2026-03-10T13:42:48.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:48 vm00 bash[20748]: audit 2026-03-10T13:42:47.128661+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:48.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:48 vm00 bash[20748]: audit 2026-03-10T13:42:47.128661+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:48.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:48 vm00 bash[20748]: cephadm 2026-03-10T13:42:47.129218+0000 mgr.a (mgr.14150) 17 : cephadm [INF] Added host vm07 2026-03-10T13:42:48.466 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:48 vm00 bash[20748]: cephadm 2026-03-10T13:42:47.129218+0000 mgr.a (mgr.14150) 17 : cephadm [INF] Added host vm07 2026-03-10T13:42:48.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:48 vm00 bash[20748]: audit 2026-03-10T13:42:47.129587+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:42:48.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:48 vm00 bash[20748]: audit 2026-03-10T13:42:47.129587+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:42:48.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:48 vm00 bash[20748]: audit 2026-03-10T13:42:47.420386+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:48.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:48 vm00 bash[20748]: audit 2026-03-10T13:42:47.420386+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:49.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:49 vm00 bash[20748]: audit 2026-03-10T13:42:48.692533+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:49.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:49 vm00 bash[20748]: audit 2026-03-10T13:42:48.692533+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:49.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:49 vm00 bash[20748]: audit 2026-03-10T13:42:49.236571+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:49.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:49 vm00 bash[20748]: audit 
2026-03-10T13:42:49.236571+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:51.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:51 vm00 bash[20748]: cluster 2026-03-10T13:42:50.200477+0000 mgr.a (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:42:51.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:51 vm00 bash[20748]: cluster 2026-03-10T13:42:50.200477+0000 mgr.a (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:42:51.805 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:42:52.935 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:42:52.935 INFO:teuthology.orchestra.run.vm00.stdout:[{"addr": "192.168.123.100", "hostname": "vm00", "labels": [], "status": ""}, {"addr": "192.168.123.107", "hostname": "vm07", "labels": [], "status": ""}] 2026-03-10T13:42:53.020 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm08 2026-03-10T13:42:53.020 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T13:42:53.020 DEBUG:teuthology.orchestra.run.vm08:> dd of=/etc/ceph/ceph.conf 2026-03-10T13:42:53.024 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T13:42:53.024 DEBUG:teuthology.orchestra.run.vm08:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:42:53.069 INFO:tasks.cephadm:Adding host vm08 to orchestrator... 
2026-03-10T13:42:53.069 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph orch host add vm08 2026-03-10T13:42:53.652 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: cluster 2026-03-10T13:42:52.200649+0000 mgr.a (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:42:53.652 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: cluster 2026-03-10T13:42:52.200649+0000 mgr.a (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.650972+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.650972+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.655554+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.655554+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.660012+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 
bash[20748]: audit 2026-03-10T13:42:52.660012+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.663245+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.663245+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.663931+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.663931+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.664567+0000 mon.a (mon.0) 123 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.664567+0000 mon.a (mon.0) 123 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.664934+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.664934+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: cephadm 2026-03-10T13:42:52.665512+0000 mgr.a (mgr.14150) 20 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: cephadm 2026-03-10T13:42:52.665512+0000 mgr.a (mgr.14150) 20 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: cephadm 2026-03-10T13:42:52.708110+0000 mgr.a (mgr.14150) 21 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: cephadm 2026-03-10T13:42:52.708110+0000 mgr.a (mgr.14150) 21 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: cephadm 2026-03-10T13:42:52.743735+0000 mgr.a (mgr.14150) 22 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: cephadm 2026-03-10T13:42:52.743735+0000 mgr.a (mgr.14150) 22 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: cephadm 2026-03-10T13:42:52.779493+0000 mgr.a (mgr.14150) 23 : cephadm [INF] Updating 
vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: cephadm 2026-03-10T13:42:52.779493+0000 mgr.a (mgr.14150) 23 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.818915+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.818915+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.821101+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.821101+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.823223+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.823223+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.936512+0000 mgr.a (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", 
""], "format": "json"}]: dispatch 2026-03-10T13:42:53.653 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:53 vm00 bash[20748]: audit 2026-03-10T13:42:52.936512+0000 mgr.a (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:42:55.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:55 vm00 bash[20748]: cluster 2026-03-10T13:42:54.200828+0000 mgr.a (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:42:55.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:55 vm00 bash[20748]: cluster 2026-03-10T13:42:54.200828+0000 mgr.a (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:42:57.683 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:42:57.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:57 vm00 bash[20748]: cluster 2026-03-10T13:42:56.200989+0000 mgr.a (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:42:57.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:57 vm00 bash[20748]: cluster 2026-03-10T13:42:56.200989+0000 mgr.a (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:42:58.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:58 vm00 bash[20748]: audit 2026-03-10T13:42:57.964245+0000 mgr.a (mgr.14150) 27 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:42:58.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:58 vm00 bash[20748]: audit 2026-03-10T13:42:57.964245+0000 mgr.a (mgr.14150) 27 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm08", "target": ["mon-mgr", ""]}]: 
dispatch 2026-03-10T13:42:59.691 INFO:teuthology.orchestra.run.vm00.stdout:Added host 'vm08' with addr '192.168.123.108' 2026-03-10T13:42:59.752 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph orch host ls --format=json 2026-03-10T13:42:59.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:59 vm00 bash[20748]: cluster 2026-03-10T13:42:58.201190+0000 mgr.a (mgr.14150) 28 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:42:59.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:59 vm00 bash[20748]: cluster 2026-03-10T13:42:58.201190+0000 mgr.a (mgr.14150) 28 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:42:59.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:59 vm00 bash[20748]: cephadm 2026-03-10T13:42:58.468705+0000 mgr.a (mgr.14150) 29 : cephadm [INF] Deploying cephadm binary to vm08 2026-03-10T13:42:59.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:59 vm00 bash[20748]: cephadm 2026-03-10T13:42:58.468705+0000 mgr.a (mgr.14150) 29 : cephadm [INF] Deploying cephadm binary to vm08 2026-03-10T13:42:59.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:59 vm00 bash[20748]: audit 2026-03-10T13:42:59.691914+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:59.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:59 vm00 bash[20748]: audit 2026-03-10T13:42:59.691914+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:42:59.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:59 vm00 bash[20748]: audit 2026-03-10T13:42:59.692436+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:42:59.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:42:59 vm00 bash[20748]: audit 2026-03-10T13:42:59.692436+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:01.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:00 vm00 bash[20748]: cephadm 2026-03-10T13:42:59.692207+0000 mgr.a (mgr.14150) 30 : cephadm [INF] Added host vm08 2026-03-10T13:43:01.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:00 vm00 bash[20748]: cephadm 2026-03-10T13:42:59.692207+0000 mgr.a (mgr.14150) 30 : cephadm [INF] Added host vm08 2026-03-10T13:43:01.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:00 vm00 bash[20748]: audit 2026-03-10T13:43:00.193532+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:01.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:00 vm00 bash[20748]: audit 2026-03-10T13:43:00.193532+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:02.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:01 vm00 bash[20748]: cluster 2026-03-10T13:43:00.201373+0000 mgr.a (mgr.14150) 31 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:02.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:01 vm00 bash[20748]: cluster 2026-03-10T13:43:00.201373+0000 mgr.a (mgr.14150) 31 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:02.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:01 vm00 bash[20748]: audit 2026-03-10T13:43:01.472920+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:02.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:01 vm00 bash[20748]: audit 
2026-03-10T13:43:01.472920+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:03.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:03 vm00 bash[20748]: audit 2026-03-10T13:43:02.031951+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:03.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:03 vm00 bash[20748]: audit 2026-03-10T13:43:02.031951+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:03.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:03 vm00 bash[20748]: cluster 2026-03-10T13:43:02.201603+0000 mgr.a (mgr.14150) 32 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:03.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:03 vm00 bash[20748]: cluster 2026-03-10T13:43:02.201603+0000 mgr.a (mgr.14150) 32 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:04.374 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:43:04.619 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:43:04.619 INFO:teuthology.orchestra.run.vm00.stdout:[{"addr": "192.168.123.100", "hostname": "vm00", "labels": [], "status": ""}, {"addr": "192.168.123.107", "hostname": "vm07", "labels": [], "status": ""}, {"addr": "192.168.123.108", "hostname": "vm08", "labels": [], "status": ""}] 2026-03-10T13:43:04.674 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-10T13:43:04.674 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph osd crush tunables default 2026-03-10T13:43:06.216 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: cluster 2026-03-10T13:43:04.201823+0000 mgr.a (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:06.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: cluster 2026-03-10T13:43:04.201823+0000 mgr.a (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:06.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:04.619992+0000 mgr.a (mgr.14150) 34 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:04.619992+0000 mgr.a (mgr.14150) 34 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:04.901849+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:04.901849+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:04.903915+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:04.903915+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:06.217 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:04.906449+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:04.906449+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:04.908376+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:04.908376+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:04.908931+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:04.908931+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:04.909511+0000 mon.a (mon.0) 138 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:04.909511+0000 mon.a (mon.0) 138 : audit [DBG] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:04.909881+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:04.909881+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: cephadm 2026-03-10T13:43:04.910434+0000 mgr.a (mgr.14150) 35 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: cephadm 2026-03-10T13:43:04.910434+0000 mgr.a (mgr.14150) 35 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: cephadm 2026-03-10T13:43:04.942373+0000 mgr.a (mgr.14150) 36 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: cephadm 2026-03-10T13:43:04.942373+0000 mgr.a (mgr.14150) 36 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: cephadm 2026-03-10T13:43:04.972658+0000 mgr.a (mgr.14150) 37 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: cephadm 2026-03-10T13:43:04.972658+0000 mgr.a (mgr.14150) 37 
: cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: cephadm 2026-03-10T13:43:05.002373+0000 mgr.a (mgr.14150) 38 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: cephadm 2026-03-10T13:43:05.002373+0000 mgr.a (mgr.14150) 38 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:05.035278+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:05.035278+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:05.037603+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:05.037603+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:05.039725+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:06.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:05 vm00 bash[20748]: audit 2026-03-10T13:43:05.039725+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 
2026-03-10T13:43:08.382 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:43:08.394 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:07 vm00 bash[20748]: cluster 2026-03-10T13:43:06.201989+0000 mgr.a (mgr.14150) 39 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:08.394 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:07 vm00 bash[20748]: cluster 2026-03-10T13:43:06.201989+0000 mgr.a (mgr.14150) 39 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:08.973 INFO:teuthology.orchestra.run.vm00.stderr:adjusted tunables profile to default 2026-03-10T13:43:09.029 INFO:tasks.cephadm:Adding mon.a on vm00 2026-03-10T13:43:09.030 INFO:tasks.cephadm:Adding mon.b on vm07 2026-03-10T13:43:09.030 INFO:tasks.cephadm:Adding mon.c on vm08 2026-03-10T13:43:09.030 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph orch apply mon '3;vm00:192.168.123.100=a;vm07:192.168.123.107=b;vm08:192.168.123.108=c' 2026-03-10T13:43:09.393 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:08 vm00 bash[20748]: audit 2026-03-10T13:43:08.641382+0000 mon.a (mon.0) 143 : audit [INF] from='client.? 192.168.123.100:0/4173021281' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T13:43:09.393 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:08 vm00 bash[20748]: audit 2026-03-10T13:43:08.641382+0000 mon.a (mon.0) 143 : audit [INF] from='client.? 
192.168.123.100:0/4173021281' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T13:43:10.144 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:43:10.394 INFO:teuthology.orchestra.run.vm08.stdout:Scheduled mon update... 2026-03-10T13:43:10.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:09 vm00 bash[20748]: cluster 2026-03-10T13:43:08.202137+0000 mgr.a (mgr.14150) 40 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:10.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:09 vm00 bash[20748]: cluster 2026-03-10T13:43:08.202137+0000 mgr.a (mgr.14150) 40 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:10.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:09 vm00 bash[20748]: audit 2026-03-10T13:43:08.973838+0000 mon.a (mon.0) 144 : audit [INF] from='client.? 192.168.123.100:0/4173021281' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T13:43:10.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:09 vm00 bash[20748]: audit 2026-03-10T13:43:08.973838+0000 mon.a (mon.0) 144 : audit [INF] from='client.? 
192.168.123.100:0/4173021281' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T13:43:10.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:09 vm00 bash[20748]: cluster 2026-03-10T13:43:08.976200+0000 mon.a (mon.0) 145 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T13:43:10.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:09 vm00 bash[20748]: cluster 2026-03-10T13:43:08.976200+0000 mon.a (mon.0) 145 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T13:43:10.471 DEBUG:teuthology.orchestra.run.vm07:mon.b> sudo journalctl -f -n 0 -u ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mon.b.service 2026-03-10T13:43:10.472 DEBUG:teuthology.orchestra.run.vm08:mon.c> sudo journalctl -f -n 0 -u ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mon.c.service 2026-03-10T13:43:10.473 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-10T13:43:10.473 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph mon dump -f json 2026-03-10T13:43:11.619 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.c/config 2026-03-10T13:43:11.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: cluster 2026-03-10T13:43:10.202300+0000 mgr.a (mgr.14150) 41 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: cluster 2026-03-10T13:43:10.202300+0000 mgr.a (mgr.14150) 41 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: audit 2026-03-10T13:43:10.391017+0000 mgr.a (mgr.14150) 42 : audit [DBG] 
from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm00:192.168.123.100=a;vm07:192.168.123.107=b;vm08:192.168.123.108=c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: audit 2026-03-10T13:43:10.391017+0000 mgr.a (mgr.14150) 42 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm00:192.168.123.100=a;vm07:192.168.123.107=b;vm08:192.168.123.108=c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: cephadm 2026-03-10T13:43:10.392184+0000 mgr.a (mgr.14150) 43 : cephadm [INF] Saving service mon spec with placement vm00:192.168.123.100=a;vm07:192.168.123.107=b;vm08:192.168.123.108=c;count:3 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: cephadm 2026-03-10T13:43:10.392184+0000 mgr.a (mgr.14150) 43 : cephadm [INF] Saving service mon spec with placement vm00:192.168.123.100=a;vm07:192.168.123.107=b;vm08:192.168.123.108=c;count:3 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: audit 2026-03-10T13:43:10.394697+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: audit 2026-03-10T13:43:10.394697+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: audit 2026-03-10T13:43:10.395161+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:11.717 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: audit 2026-03-10T13:43:10.395161+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: audit 2026-03-10T13:43:10.396154+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: audit 2026-03-10T13:43:10.396154+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: audit 2026-03-10T13:43:10.396537+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: audit 2026-03-10T13:43:10.396537+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: audit 2026-03-10T13:43:10.399538+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: audit 2026-03-10T13:43:10.399538+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: audit 
2026-03-10T13:43:10.400445+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: audit 2026-03-10T13:43:10.400445+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: audit 2026-03-10T13:43:10.400791+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: audit 2026-03-10T13:43:10.400791+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: cephadm 2026-03-10T13:43:10.401283+0000 mgr.a (mgr.14150) 44 : cephadm [INF] Deploying daemon mon.c on vm08 2026-03-10T13:43:11.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:11 vm00 bash[20748]: cephadm 2026-03-10T13:43:10.401283+0000 mgr.a (mgr.14150) 44 : cephadm [INF] Deploying daemon mon.c on vm08 2026-03-10T13:43:12.011 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:11 vm08 bash[23387]: debug 2026-03-10T13:43:11.994+0000 7fe361aced80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 2026-03-10T13:43:12.012 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.010+0000 7fe361aced80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773150192012764, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1643, "file_checksum": "", 
"file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773150191, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6c9b08a4-0450-4da2-a302-35a1ff31b0eb", "db_session_id": "5T9OAFNL53GVNA0EAOXU", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} 2026-03-10T13:43:12.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.010+0000 7fe361aced80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773150192013286, "job": 1, "event": "recovery_finished"} 2026-03-10T13:43:12.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.010+0000 7fe361aced80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 10 2026-03-10T13:43:12.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.014+0000 7fe361aced80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-c/store.db/000004.log 
immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T13:43:12.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.014+0000 7fe361aced80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5573d6548e00 2026-03-10T13:43:12.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.014+0000 7fe361aced80 4 rocksdb: DB pointer 0x5573d6654000 2026-03-10T13:43:12.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.014+0000 7fe361aced80 0 mon.c does not exist in monmap, will attempt to join an existing cluster 2026-03-10T13:43:12.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.014+0000 7fe361aced80 0 using public_addr v2:192.168.123.108:0/0 -> [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] 2026-03-10T13:43:12.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.014+0000 7fe361aced80 0 starting mon.c rank -1 at public addrs [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] at bind addrs [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon_data /var/lib/ceph/mon/ceph-c fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:43:12.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.014+0000 7fe361aced80 1 mon.c@-1(???) 
e0 preinit fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:43:12.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.018+0000 7fe357898640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-10T13:43:12.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.018+0000 7fe357898640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-10T13:43:12.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: ** DB Stats ** 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: ** Compaction Stats [default] ** 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: Level 
Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.015 0 0 0.0 0.0 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: Sum 1/0 1.60 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.015 0 0 0.0 0.0 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.01 0.00 1 0.015 0 0 0.0 0.0 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: ** Compaction Stats [default] ** 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.01 0.00 1 0.015 0 0 0.0 0.0 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: 
Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: AddFile(Keys): cumulative 0, interval 0 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: Cumulative compaction: 0.00 GB write, 0.05 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: Interval compaction: 0.00 GB write, 0.05 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: Block cache BinnedLRUCache@0x5573d6547350#8 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.9e-05 secs_since: 0 2026-03-10T13:43:12.339 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%) 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: ** File Read Latency Histogram By Level [default] ** 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.038+0000 7fe35a89e640 0 mon.c@-1(synchronizing).mds e1 new map 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.038+0000 7fe35a89e640 0 mon.c@-1(synchronizing).mds e1 print_map 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: e1 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: btime 2026-03-10T13:42:08:641766+0000 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: legacy client fscid: -1 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: No filesystems configured 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 
2026-03-10T13:43:12.038+0000 7fe35a89e640 1 mon.c@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.038+0000 7fe35a89e640 1 mon.c@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.038+0000 7fe35a89e640 1 mon.c@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.038+0000 7fe35a89e640 1 mon.c@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.038+0000 7fe35a89e640 1 mon.c@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.038+0000 7fe35a89e640 1 mon.c@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.038+0000 7fe35a89e640 0 mon.c@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.038+0000 7fe35a89e640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.038+0000 7fe35a89e640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T13:43:12.339 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.038+0000 7fe35a89e640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.642278+0000 mon.a (mon.0) 0 : cluster [INF] mkfs c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.642278+0000 mon.a (mon.0) 0 : cluster [INF] mkfs c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.635678+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.635678+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.641147+0000 mon.a (mon.0) 2 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.641147+0000 mon.a (mon.0) 2 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.641524+0000 mon.a (mon.0) 3 : cluster [DBG] monmap epoch 1 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.641524+0000 mon.a (mon.0) 3 : cluster [DBG] monmap epoch 1 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 
2026-03-10T13:42:08.641529+0000 mon.a (mon.0) 4 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.641529+0000 mon.a (mon.0) 4 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.641532+0000 mon.a (mon.0) 5 : cluster [DBG] last_changed 2026-03-10T13:42:07.014183+0000 2026-03-10T13:43:12.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.641532+0000 mon.a (mon.0) 5 : cluster [DBG] last_changed 2026-03-10T13:42:07.014183+0000 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.641535+0000 mon.a (mon.0) 6 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.641535+0000 mon.a (mon.0) 6 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.641538+0000 mon.a (mon.0) 7 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.641538+0000 mon.a (mon.0) 7 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.641540+0000 mon.a (mon.0) 8 : cluster [DBG] election_strategy: 1 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.641540+0000 mon.a (mon.0) 8 : cluster [DBG] election_strategy: 1 2026-03-10T13:43:12.340 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.641543+0000 mon.a (mon.0) 9 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.641543+0000 mon.a (mon.0) 9 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.644031+0000 mon.a (mon.0) 10 : cluster [DBG] fsmap 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.644031+0000 mon.a (mon.0) 10 : cluster [DBG] fsmap 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.649369+0000 mon.a (mon.0) 11 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.649369+0000 mon.a (mon.0) 11 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.651165+0000 mon.a (mon.0) 12 : cluster [DBG] mgrmap e1: no daemons active 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:08.651165+0000 mon.a (mon.0) 12 : cluster [DBG] mgrmap e1: no daemons active 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:08.717487+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 
192.168.123.100:0/674371790' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:08.717487+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.100:0/674371790' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:09.261396+0000 mon.a (mon.0) 14 : audit [INF] from='client.? 192.168.123.100:0/1748079503' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:09.261396+0000 mon.a (mon.0) 14 : audit [INF] from='client.? 192.168.123.100:0/1748079503' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:09.264233+0000 mon.a (mon.0) 15 : audit [INF] from='client.? 192.168.123.100:0/1748079503' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:09.264233+0000 mon.a (mon.0) 15 : audit [INF] from='client.? 
192.168.123.100:0/1748079503' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.479804+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.479804+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.479856+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.479856+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.479861+0000 mon.a (mon.0) 3 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.479861+0000 mon.a (mon.0) 3 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.479866+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T13:42:07.014183+0000 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.479866+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-10T13:42:07.014183+0000 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.479875+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:43:12.340 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.479875+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.479880+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.479880+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.479885+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.479885+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.479890+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.479890+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.480203+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.480203+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.480219+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T13:43:12.340 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.480219+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.480948+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:10.480948+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:10.561404+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.100:0/2442875451' entity='client.admin' 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:10.561404+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.100:0/2442875451' entity='client.admin' 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:11.136181+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.100:0/3276766251' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:11.136181+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.100:0/3276766251' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:13.400250+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 
192.168.123.100:0/467314035' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:13.400250+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.100:0/467314035' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:14.203046+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon a 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:14.203046+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon a 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:14.207364+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: a(active, starting, since 0.00439295s) 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:14.207364+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: a(active, starting, since 0.00439295s) 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.210097+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.210097+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.210173+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' 
entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.210173+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.210238+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.210238+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:43:12.340 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.210297+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.210297+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.210349+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.210349+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:43:12.341 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.210400+0000 mon.a (mon.0) 22 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.210400+0000 mon.a (mon.0) 22 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.211349+0000 mon.a (mon.0) 23 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.211349+0000 mon.a (mon.0) 23 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.211665+0000 mon.a (mon.0) 24 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.211665+0000 mon.a (mon.0) 24 : audit [DBG] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:14.217577+0000 mon.a (mon.0) 25 : cluster [INF] Manager daemon a is now available 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 
2026-03-10T13:42:14.217577+0000 mon.a (mon.0) 25 : cluster [INF] Manager daemon a is now available 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.228305+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.228305+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.230509+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.230509+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.232455+0000 mon.a (mon.0) 28 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.232455+0000 mon.a (mon.0) 28 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.233616+0000 mon.a (mon.0) 29 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 
2026-03-10T13:42:14.233616+0000 mon.a (mon.0) 29 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.234938+0000 mon.a (mon.0) 30 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:14.234938+0000 mon.a (mon.0) 30 : audit [INF] from='mgr.14100 192.168.123.100:0/1413049985' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:15.212458+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e3: a(active, since 1.00949s) 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:15.212458+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e3: a(active, since 1.00949s) 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:15.701495+0000 mon.a (mon.0) 32 : audit [DBG] from='client.? 192.168.123.100:0/3957218727' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:15.701495+0000 mon.a (mon.0) 32 : audit [DBG] from='client.? 192.168.123.100:0/3957218727' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:15.974691+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 
192.168.123.100:0/3269230592' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:15.974691+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.100:0/3269230592' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:15.977243+0000 mon.a (mon.0) 34 : audit [INF] from='client.? 192.168.123.100:0/3269230592' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:15.977243+0000 mon.a (mon.0) 34 : audit [INF] from='client.? 192.168.123.100:0/3269230592' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:16.258956+0000 mon.a (mon.0) 35 : audit [INF] from='client.? 192.168.123.100:0/2783189308' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:16.258956+0000 mon.a (mon.0) 35 : audit [INF] from='client.? 192.168.123.100:0/2783189308' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:16.978789+0000 mon.a (mon.0) 36 : audit [INF] from='client.? 
192.168.123.100:0/2783189308' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:16.978789+0000 mon.a (mon.0) 36 : audit [INF] from='client.? 192.168.123.100:0/2783189308' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:16.983709+0000 mon.a (mon.0) 37 : cluster [DBG] mgrmap e4: a(active, since 2s) 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:16.983709+0000 mon.a (mon.0) 37 : cluster [DBG] mgrmap e4: a(active, since 2s) 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:17.339816+0000 mon.a (mon.0) 38 : audit [DBG] from='client.? 192.168.123.100:0/3329532155' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:17.339816+0000 mon.a (mon.0) 38 : audit [DBG] from='client.? 
192.168.123.100:0/3329532155' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:20.129919+0000 mon.a (mon.0) 39 : cluster [INF] Active manager daemon a restarted 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:20.129919+0000 mon.a (mon.0) 39 : cluster [INF] Active manager daemon a restarted 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:20.130407+0000 mon.a (mon.0) 40 : cluster [INF] Activating manager daemon a 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:20.130407+0000 mon.a (mon.0) 40 : cluster [INF] Activating manager daemon a 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:20.136700+0000 mon.a (mon.0) 41 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:20.136700+0000 mon.a (mon.0) 41 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:20.136875+0000 mon.a (mon.0) 42 : cluster [DBG] mgrmap e5: a(active, starting, since 0.00658721s) 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:20.136875+0000 mon.a (mon.0) 42 : cluster [DBG] mgrmap e5: a(active, starting, since 0.00658721s) 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.138919+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "mon metadata", 
"id": "a"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.138919+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.140424+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:43:12.341 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.140424+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.140923+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.140923+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.141259+0000 mon.a (mon.0) 46 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.141259+0000 mon.a (mon.0) 46 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:43:12.342 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.141577+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.141577+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:20.147498+0000 mon.a (mon.0) 48 : cluster [INF] Manager daemon a is now available 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:20.147498+0000 mon.a (mon.0) 48 : cluster [INF] Manager daemon a is now available 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.157163+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.157163+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.161390+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.161390+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.175354+0000 mon.a (mon.0) 51 : audit [DBG] 
from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.175354+0000 mon.a (mon.0) 51 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.177927+0000 mon.a (mon.0) 52 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.177927+0000 mon.a (mon.0) 52 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.179137+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.179137+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:20.154273+0000 mgr.a (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 
2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:20.154273+0000 mgr.a (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.192082+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:20.192082+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:21.140551+0000 mon.a (mon.0) 55 : cluster [DBG] mgrmap e6: a(active, since 1.01026s) 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:21.140551+0000 mon.a (mon.0) 55 : cluster [DBG] mgrmap e6: a(active, since 1.01026s) 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:21.142551+0000 mgr.a (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:21.142551+0000 mgr.a (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:21.146784+0000 mgr.a (mgr.14118) 3 : audit 
[DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:21.146784+0000 mgr.a (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:21.225053+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:21.225053+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:21.229478+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:21.229478+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:21.449061+0000 mgr.a (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:21.449061+0000 mgr.a (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 
2026-03-10T13:42:21.514112+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:21.514112+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:21.521278+0000 mon.a (mon.0) 59 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:21.521278+0000 mon.a (mon.0) 59 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:21.808683+0000 mgr.a (mgr.14118) 5 : cephadm [INF] [10/Mar/2026:13:42:21] ENGINE Bus STARTING 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:21.808683+0000 mgr.a (mgr.14118) 5 : cephadm [INF] [10/Mar/2026:13:42:21] ENGINE Bus STARTING 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:21.856651+0000 mgr.a (mgr.14118) 6 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:21.856651+0000 mgr.a (mgr.14118) 6 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.342 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:21.921254+0000 mgr.a (mgr.14118) 7 : cephadm [INF] [10/Mar/2026:13:42:21] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:21.921254+0000 mgr.a (mgr.14118) 7 : cephadm [INF] [10/Mar/2026:13:42:21] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:21.921798+0000 mgr.a (mgr.14118) 8 : cephadm [INF] [10/Mar/2026:13:42:21] ENGINE Client ('192.168.123.100', 45222) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:21.921798+0000 mgr.a (mgr.14118) 8 : cephadm [INF] [10/Mar/2026:13:42:21] ENGINE Client ('192.168.123.100', 45222) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:22.022182+0000 mgr.a (mgr.14118) 9 : cephadm [INF] [10/Mar/2026:13:42:22] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:22.022182+0000 mgr.a (mgr.14118) 9 : cephadm [INF] [10/Mar/2026:13:42:22] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:22.022220+0000 mgr.a (mgr.14118) 10 : cephadm [INF] [10/Mar/2026:13:42:22] ENGINE Bus STARTED 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 
2026-03-10T13:42:22.022220+0000 mgr.a (mgr.14118) 10 : cephadm [INF] [10/Mar/2026:13:42:22] ENGINE Bus STARTED 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:22.022707+0000 mon.a (mon.0) 60 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:22.022707+0000 mon.a (mon.0) 60 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:22.130062+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:22.130062+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:22.133691+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:22.133691+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:22.111468+0000 mgr.a (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 
2026-03-10T13:42:22.111468+0000 mgr.a (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:22.111658+0000 mgr.a (mgr.14118) 12 : cephadm [INF] Generating ssh key... 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:22.111658+0000 mgr.a (mgr.14118) 12 : cephadm [INF] Generating ssh key... 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:22.403946+0000 mgr.a (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.342 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:22.403946+0000 mgr.a (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:22.656992+0000 mgr.a (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:22.656992+0000 mgr.a (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm00", "addr": "192.168.123.100", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:23.133430+0000 mon.a (mon.0) 63 : cluster [DBG] 
mgrmap e7: a(active, since 3s) 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:23.133430+0000 mon.a (mon.0) 63 : cluster [DBG] mgrmap e7: a(active, since 3s) 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:23.224175+0000 mgr.a (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm00 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:23.224175+0000 mgr.a (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm00 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:24.492745+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:24.492745+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:24.493128+0000 mgr.a (mgr.14118) 16 : cephadm [INF] Added host vm00 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:24.493128+0000 mgr.a (mgr.14118) 16 : cephadm [INF] Added host vm00 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:24.495611+0000 mon.a (mon.0) 65 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:24.495611+0000 mon.a (mon.0) 65 : audit [DBG] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:24.836106+0000 mgr.a (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:24.836106+0000 mgr.a (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:24.837072+0000 mgr.a (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:24.837072+0000 mgr.a (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:24.840211+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:24.840211+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:25.105693+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:25.105693+0000 mon.a (mon.0) 67 : audit [INF] 
from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:25.359200+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.100:0/1988724834' entity='client.admin' 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:25.359200+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.100:0/1988724834' entity='client.admin' 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:25.100999+0000 mgr.a (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:25.100999+0000 mgr.a (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:25.101734+0000 mgr.a (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:25.101734+0000 mgr.a (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:25.644193+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 
192.168.123.100:0/4183873511' entity='client.admin' 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:25.644193+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.100:0/4183873511' entity='client.admin' 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:25.954844+0000 mon.a (mon.0) 70 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:25.954844+0000 mon.a (mon.0) 70 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:26.017557+0000 mon.a (mon.0) 71 : audit [INF] from='client.? 192.168.123.100:0/3089668869' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:26.017557+0000 mon.a (mon.0) 71 : audit [INF] from='client.? 192.168.123.100:0/3089668869' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:26.261673+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:26.261673+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.14118 192.168.123.100:0/997530128' entity='mgr.a' 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:26.955756+0000 mon.a (mon.0) 73 : audit [INF] from='client.? 
192.168.123.100:0/3089668869' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:26.955756+0000 mon.a (mon.0) 73 : audit [INF] from='client.? 192.168.123.100:0/3089668869' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:26.958843+0000 mon.a (mon.0) 74 : cluster [DBG] mgrmap e8: a(active, since 6s) 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:26.958843+0000 mon.a (mon.0) 74 : cluster [DBG] mgrmap e8: a(active, since 6s) 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:27.313786+0000 mon.a (mon.0) 75 : audit [DBG] from='client.? 192.168.123.100:0/463782601' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:27.313786+0000 mon.a (mon.0) 75 : audit [DBG] from='client.? 
192.168.123.100:0/463782601' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:30.190698+0000 mon.a (mon.0) 76 : cluster [INF] Active manager daemon a restarted 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:30.190698+0000 mon.a (mon.0) 76 : cluster [INF] Active manager daemon a restarted 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:30.190953+0000 mon.a (mon.0) 77 : cluster [INF] Activating manager daemon a 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:30.190953+0000 mon.a (mon.0) 77 : cluster [INF] Activating manager daemon a 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:30.195647+0000 mon.a (mon.0) 78 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:30.195647+0000 mon.a (mon.0) 78 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:30.195782+0000 mon.a (mon.0) 79 : cluster [DBG] mgrmap e9: a(active, starting, since 0.00494311s) 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:30.195782+0000 mon.a (mon.0) 79 : cluster [DBG] mgrmap e9: a(active, starting, since 0.00494311s) 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:30.198012+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", 
"id": "a"}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:30.198012+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:30.198780+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:30.198780+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:30.199459+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:30.199459+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:30.199564+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:30.199564+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:43:12.343 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:30.199657+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:30.199657+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:30.204993+0000 mon.a (mon.0) 85 : cluster [INF] Manager daemon a is now available 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:30.204993+0000 mon.a (mon.0) 85 : cluster [INF] Manager daemon a is now available 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:30.220832+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:30.220832+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:30.223605+0000 mon.a (mon.0) 87 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:12.343 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 
2026-03-10T13:42:30.223605+0000 mon.a (mon.0) 87 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:30.233988+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:30.233988+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:31.136510+0000 mgr.a (mgr.14150) 1 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Bus STARTING 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:31.136510+0000 mgr.a (mgr.14150) 1 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Bus STARTING 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:31.199043+0000 mon.a (mon.0) 89 : cluster [DBG] mgrmap e10: a(active, since 1.0082s) 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:31.199043+0000 mon.a (mon.0) 89 : cluster [DBG] mgrmap e10: a(active, since 1.0082s) 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:31.201124+0000 mgr.a (mgr.14150) 2 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T13:43:12.344 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:31.201124+0000 mgr.a (mgr.14150) 2 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:31.204773+0000 mgr.a (mgr.14150) 3 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:31.204773+0000 mgr.a (mgr.14150) 3 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:31.244346+0000 mgr.a (mgr.14150) 4 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:31.244346+0000 mgr.a (mgr.14150) 4 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:31.246624+0000 mgr.a (mgr.14150) 5 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Client ('192.168.123.100', 51908) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:31.246624+0000 mgr.a (mgr.14150) 5 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Client ('192.168.123.100', 51908) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T13:43:12.344 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:31.246774+0000 mgr.a (mgr.14150) 6 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:31.246774+0000 mgr.a (mgr.14150) 6 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:31.247014+0000 mgr.a (mgr.14150) 7 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Bus STARTED 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:31.247014+0000 mgr.a (mgr.14150) 7 : cephadm [INF] [10/Mar/2026:13:42:31] ENGINE Bus STARTED 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:31.463525+0000 mgr.a (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:31.463525+0000 mgr.a (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:31.498941+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:31.498941+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.344 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:31.501363+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:31.501363+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:31.771492+0000 mgr.a (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:31.771492+0000 mgr.a (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:31.922633+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:31.922633+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:32.174796+0000 mon.a (mon.0) 93 : audit [DBG] from='client.? 
192.168.123.100:0/1131920560' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:32.174796+0000 mon.a (mon.0) 93 : audit [DBG] from='client.? 192.168.123.100:0/1131920560' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:32.497645+0000 mon.a (mon.0) 94 : audit [INF] from='client.? 192.168.123.100:0/3905016985' entity='client.admin' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:32.497645+0000 mon.a (mon.0) 94 : audit [INF] from='client.? 192.168.123.100:0/3905016985' entity='client.admin' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:32.926535+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e11: a(active, since 2s) 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:32.926535+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e11: a(active, since 2s) 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:35.186165+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:35.186165+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:35.744748+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:35.744748+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:37.190410+0000 mon.a (mon.0) 98 : cluster [DBG] mgrmap e12: a(active, since 6s) 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:37.190410+0000 mon.a (mon.0) 98 : cluster [DBG] mgrmap e12: a(active, since 6s) 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:37.311115+0000 mon.a (mon.0) 99 : audit [INF] from='client.? 192.168.123.100:0/1747826160' entity='client.admin' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:37.311115+0000 mon.a (mon.0) 99 : audit [INF] from='client.? 
192.168.123.100:0/1747826160' entity='client.admin' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.315246+0000 mgr.a (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.315246+0000 mgr.a (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.318139+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.318139+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.854952+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.854952+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.857035+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: 
audit 2026-03-10T13:42:41.857035+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.857616+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.857616+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.858418+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.858418+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.858992+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.858992+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 
10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:41.859760+0000 mgr.a (mgr.14150) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:41.859760+0000 mgr.a (mgr.14150) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:41.888501+0000 mgr.a (mgr.14150) 12 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:41.888501+0000 mgr.a (mgr.14150) 12 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:41.916024+0000 mgr.a (mgr.14150) 13 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:41.916024+0000 mgr.a (mgr.14150) 13 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:41.945502+0000 mgr.a (mgr.14150) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:41.945502+0000 mgr.a (mgr.14150) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring 2026-03-10T13:43:12.344 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.978435+0000 mon.a 
(mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.978435+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.980695+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.980695+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.982772+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.982772+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.987647+0000 mon.a (mon.0) 109 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.987647+0000 mon.a (mon.0) 109 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.988592+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.988592+0000 mon.a (mon.0) 110 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.989181+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.989181+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.991443+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:41.991443+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:45.365661+0000 mgr.a (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm07", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:45.365661+0000 mgr.a (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": 
"vm07", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:45.912254+0000 mgr.a (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm07 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:45.912254+0000 mgr.a (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm07 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:47.128661+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:47.128661+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:47.129218+0000 mgr.a (mgr.14150) 17 : cephadm [INF] Added host vm07 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:47.129218+0000 mgr.a (mgr.14150) 17 : cephadm [INF] Added host vm07 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:47.129587+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:47.129587+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 
2026-03-10T13:42:47.420386+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:47.420386+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:48.692533+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:48.692533+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:49.236571+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:49.236571+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:50.200477+0000 mgr.a (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:50.200477+0000 mgr.a (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:52.200649+0000 mgr.a (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:12.345 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:52.200649+0000 mgr.a (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:52.650972+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:52.650972+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:52.655554+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:52.655554+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:52.660012+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:52.660012+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:52.663245+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:52.663245+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:52.663931+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:52.663931+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:52.664567+0000 mon.a (mon.0) 123 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:52.664567+0000 mon.a (mon.0) 123 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:52.664934+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:52.664934+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:52.665512+0000 mgr.a 
(mgr.14150) 20 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:52.708110+0000 mgr.a (mgr.14150) 21 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf
2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:52.743735+0000 mgr.a (mgr.14150) 22 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring
2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:52.779493+0000 mgr.a (mgr.14150) 23 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring
2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:52.818915+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:52.821101+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:52.823223+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:52.936512+0000 mgr.a (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:43:12.345 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:54.200828+0000 mgr.a (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:56.200989+0000 mgr.a (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:57.964245+0000 mgr.a (mgr.14150) 27 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm08", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:42:58.201190+0000 mgr.a (mgr.14150) 28 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:58.468705+0000 mgr.a (mgr.14150) 29 : cephadm [INF] Deploying cephadm binary to vm08
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:59.691914+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:42:59.692436+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:42:59.692207+0000 mgr.a (mgr.14150) 30 : cephadm [INF] Added host vm08
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:00.193532+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:43:00.201373+0000 mgr.a (mgr.14150) 31 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:01.472920+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:02.031951+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:43:02.201603+0000 mgr.a (mgr.14150) 32 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:43:04.201823+0000 mgr.a (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:04.619992+0000 mgr.a (mgr.14150) 34 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:04.901849+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:04.903915+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:04.906449+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:04.908376+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:04.908931+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:04.909511+0000 mon.a (mon.0) 138 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:04.909881+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:43:04.910434+0000 mgr.a (mgr.14150) 35 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:43:04.942373+0000 mgr.a (mgr.14150) 36 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:43:04.972658+0000 mgr.a (mgr.14150) 37 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:43:05.002373+0000 mgr.a (mgr.14150) 38 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:05.035278+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:05.037603+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:05.039725+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:12.346 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:43:06.201989+0000 mgr.a (mgr.14150) 39 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:12.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:08.641382+0000 mon.a (mon.0) 143 : audit [INF] from='client.? 192.168.123.100:0/4173021281' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch
2026-03-10T13:43:12.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:43:08.202137+0000 mgr.a (mgr.14150) 40 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:12.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:08.973838+0000 mon.a (mon.0) 144 : audit [INF] from='client.? 192.168.123.100:0/4173021281' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished
2026-03-10T13:43:12.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:43:08.976200+0000 mon.a (mon.0) 145 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T13:43:12.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cluster 2026-03-10T13:43:10.202300+0000 mgr.a (mgr.14150) 41 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:12.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:10.391017+0000 mgr.a (mgr.14150) 42 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm00:192.168.123.100=a;vm07:192.168.123.107=b;vm08:192.168.123.108=c", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:43:12.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:43:10.392184+0000 mgr.a (mgr.14150) 43 : cephadm [INF] Saving service mon spec with placement vm00:192.168.123.100=a;vm07:192.168.123.107=b;vm08:192.168.123.108=c;count:3
2026-03-10T13:43:12.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:10.394697+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:12.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:10.395161+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:43:12.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:10.396154+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:43:12.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:10.396537+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:43:12.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:10.399538+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:12.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:10.400445+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T13:43:12.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: audit 2026-03-10T13:43:10.400791+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:43:12.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: cephadm 2026-03-10T13:43:10.401283+0000 mgr.a (mgr.14150) 44 : cephadm [INF] Deploying daemon mon.c on vm08
2026-03-10T13:43:12.347 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:12 vm08 bash[23387]: debug 2026-03-10T13:43:12.042+0000 7fe35a89e640 1 mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3
2026-03-10T13:43:13.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:13 vm07 bash[23044]: debug 2026-03-10T13:43:13.470+0000 7f7b4013e640 1 mon.b@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3
2026-03-10T13:43:17.071 INFO:teuthology.orchestra.run.vm08.stdout:
2026-03-10T13:43:17.071 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":2,"fsid":"c9620084-1c86-11f1-bcc5-e3fb709eab0a","modified":"2026-03-10T13:43:12.057421Z","created":"2026-03-10T13:42:07.014183Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:3300","nonce":0},{"type":"v1","addr":"192.168.123.108:6789","nonce":0}]},"addr":"192.168.123.108:6789/0","public_addr":"192.168.123.108:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-10T13:43:17.071 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 2
2026-03-10T13:43:17.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:12.061019+0000 mon.a (mon.0) 159 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T13:43:17.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:12.061238+0000 mon.a (mon.0) 160 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:43:17.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:12.061434+0000 mon.a (mon.0) 161 : cluster [INF] mon.a calling monitor election
2026-03-10T13:43:17.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:12.120889+0000 mon.a (mon.0) 162 : audit [DBG] from='client.? 192.168.123.108:0/2693467842' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T13:43:17.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:12.202466+0000 mgr.a (mgr.14150) 46 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:17.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:13.049678+0000 mon.a (mon.0) 163 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:43:17.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:13.480100+0000 mon.a (mon.0) 164 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:43:17.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:14.049794+0000 mon.a (mon.0) 165 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:43:17.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:14.057641+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election
2026-03-10T13:43:17.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:14.202654+0000 mgr.a (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:17.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:14.480215+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:43:17.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:15.049807+0000 mon.a (mon.0) 167 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:43:17.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:15.480499+0000 mon.a (mon.0) 168 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:43:17.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:16.050040+0000 mon.a (mon.0) 169 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:43:17.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:16.202811+0000 mgr.a (mgr.14150) 48 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:16.480520+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:17.050045+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.067841+0000 mon.a (mon.0) 172 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071200+0000 mon.a (mon.0) 173 : cluster [DBG] monmap epoch 2
2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071220+0000 mon.a (mon.0) 174 :
cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071220+0000 mon.a (mon.0) 174 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071230+0000 mon.a (mon.0) 175 : cluster [DBG] last_changed 2026-03-10T13:43:12.057421+0000 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071230+0000 mon.a (mon.0) 175 : cluster [DBG] last_changed 2026-03-10T13:43:12.057421+0000 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071238+0000 mon.a (mon.0) 176 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071238+0000 mon.a (mon.0) 176 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071247+0000 mon.a (mon.0) 177 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071247+0000 mon.a (mon.0) 177 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071255+0000 mon.a (mon.0) 178 : cluster [DBG] election_strategy: 1 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071255+0000 mon.a (mon.0) 178 : cluster [DBG] election_strategy: 1 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: 
cluster 2026-03-10T13:43:17.071303+0000 mon.a (mon.0) 179 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071303+0000 mon.a (mon.0) 179 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071312+0000 mon.a (mon.0) 180 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071312+0000 mon.a (mon.0) 180 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071691+0000 mon.a (mon.0) 181 : cluster [DBG] fsmap 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071691+0000 mon.a (mon.0) 181 : cluster [DBG] fsmap 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071714+0000 mon.a (mon.0) 182 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071714+0000 mon.a (mon.0) 182 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071878+0000 mon.a (mon.0) 183 : cluster [DBG] mgrmap e12: a(active, since 46s) 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071878+0000 mon.a (mon.0) 183 : cluster [DBG] mgrmap e12: a(active, since 
46s) 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071958+0000 mon.a (mon.0) 184 : cluster [INF] overall HEALTH_OK 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: cluster 2026-03-10T13:43:17.071958+0000 mon.a (mon.0) 184 : cluster [INF] overall HEALTH_OK 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:17.077591+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:17.077591+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:17.082074+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:17.082074+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:17.091895+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:17.091895+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:17.095745+0000 mon.a (mon.0) 188 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 
2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:17.095745+0000 mon.a (mon.0) 188 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:17.112744+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:17.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:17 vm08 bash[23387]: audit 2026-03-10T13:43:17.112744+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:17.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:12.061019+0000 mon.a (mon.0) 159 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:43:17.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:12.061019+0000 mon.a (mon.0) 159 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:43:17.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:12.061238+0000 mon.a (mon.0) 160 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:17.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:12.061238+0000 mon.a (mon.0) 160 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:12.061434+0000 mon.a (mon.0) 161 : cluster [INF] mon.a calling monitor election 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:12.061434+0000 mon.a (mon.0) 161 : cluster [INF] mon.a calling monitor election 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:12.120889+0000 mon.a (mon.0) 162 : audit [DBG] from='client.? 192.168.123.108:0/2693467842' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:12.120889+0000 mon.a (mon.0) 162 : audit [DBG] from='client.? 192.168.123.108:0/2693467842' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:12.202466+0000 mgr.a (mgr.14150) 46 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:12.202466+0000 mgr.a (mgr.14150) 46 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:13.049678+0000 mon.a (mon.0) 163 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:13.049678+0000 mon.a (mon.0) 163 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:17.467 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:13.480100+0000 mon.a (mon.0) 164 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:13.480100+0000 mon.a (mon.0) 164 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:14.049794+0000 mon.a (mon.0) 165 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:14.049794+0000 mon.a (mon.0) 165 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:14.057641+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:14.057641+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:14.202654+0000 mgr.a (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:14.202654+0000 mgr.a (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:17.467 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:14.480215+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:14.480215+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:15.049807+0000 mon.a (mon.0) 167 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:15.049807+0000 mon.a (mon.0) 167 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:15.480499+0000 mon.a (mon.0) 168 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:15.480499+0000 mon.a (mon.0) 168 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:16.050040+0000 mon.a (mon.0) 169 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:17.467 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:16.050040+0000 mon.a (mon.0) 169 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:16.202811+0000 mgr.a (mgr.14150) 48 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:16.202811+0000 mgr.a (mgr.14150) 48 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:16.480520+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:16.480520+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:17.050045+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:17.050045+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.067841+0000 mon.a (mon.0) 
172 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.067841+0000 mon.a (mon.0) 172 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071200+0000 mon.a (mon.0) 173 : cluster [DBG] monmap epoch 2 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071200+0000 mon.a (mon.0) 173 : cluster [DBG] monmap epoch 2 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071220+0000 mon.a (mon.0) 174 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071220+0000 mon.a (mon.0) 174 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071230+0000 mon.a (mon.0) 175 : cluster [DBG] last_changed 2026-03-10T13:43:12.057421+0000 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071230+0000 mon.a (mon.0) 175 : cluster [DBG] last_changed 2026-03-10T13:43:12.057421+0000 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071238+0000 mon.a (mon.0) 176 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071238+0000 mon.a (mon.0) 176 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:43:17.467 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071247+0000 mon.a (mon.0) 177 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071247+0000 mon.a (mon.0) 177 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071255+0000 mon.a (mon.0) 178 : cluster [DBG] election_strategy: 1 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071255+0000 mon.a (mon.0) 178 : cluster [DBG] election_strategy: 1 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071303+0000 mon.a (mon.0) 179 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071303+0000 mon.a (mon.0) 179 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071312+0000 mon.a (mon.0) 180 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071312+0000 mon.a (mon.0) 180 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071691+0000 mon.a (mon.0) 181 : cluster [DBG] fsmap 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 
2026-03-10T13:43:17.071691+0000 mon.a (mon.0) 181 : cluster [DBG] fsmap 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071714+0000 mon.a (mon.0) 182 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071714+0000 mon.a (mon.0) 182 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071878+0000 mon.a (mon.0) 183 : cluster [DBG] mgrmap e12: a(active, since 46s) 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071878+0000 mon.a (mon.0) 183 : cluster [DBG] mgrmap e12: a(active, since 46s) 2026-03-10T13:43:17.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071958+0000 mon.a (mon.0) 184 : cluster [INF] overall HEALTH_OK 2026-03-10T13:43:17.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: cluster 2026-03-10T13:43:17.071958+0000 mon.a (mon.0) 184 : cluster [INF] overall HEALTH_OK 2026-03-10T13:43:17.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:17.077591+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:17.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:17.077591+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:17.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:17.082074+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:17.468 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:17.082074+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:17.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:17.091895+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:17.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:17.091895+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:17.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:17.095745+0000 mon.a (mon.0) 188 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:17.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:17.095745+0000 mon.a (mon.0) 188 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:17.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:17.112744+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:17.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:17 vm00 bash[20748]: audit 2026-03-10T13:43:17.112744+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:18.149 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-10T13:43:18.150 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph mon dump -f json 2026-03-10T13:43:18.466 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:43:18 vm00 bash[21015]: debug 2026-03-10T13:43:18.048+0000 7f3d7e274640 -1 mgr.server handle_report got status from non-daemon mon.c 2026-03-10T13:43:21.875 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.c/config 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:12.061019+0000 mon.a (mon.0) 159 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:12.061019+0000 mon.a (mon.0) 159 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:12.061238+0000 mon.a (mon.0) 160 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:12.061238+0000 mon.a (mon.0) 160 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:12.061434+0000 mon.a (mon.0) 161 : cluster [INF] mon.a calling monitor election 
2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:12.061434+0000 mon.a (mon.0) 161 : cluster [INF] mon.a calling monitor election 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:12.120889+0000 mon.a (mon.0) 162 : audit [DBG] from='client.? 192.168.123.108:0/2693467842' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:12.120889+0000 mon.a (mon.0) 162 : audit [DBG] from='client.? 192.168.123.108:0/2693467842' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:12.202466+0000 mgr.a (mgr.14150) 46 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:12.202466+0000 mgr.a (mgr.14150) 46 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:13.049678+0000 mon.a (mon.0) 163 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:13.049678+0000 mon.a (mon.0) 163 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:13.480100+0000 mon.a (mon.0) 164 : audit [DBG] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:13.480100+0000 mon.a (mon.0) 164 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:14.049794+0000 mon.a (mon.0) 165 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:14.049794+0000 mon.a (mon.0) 165 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:14.057641+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:14.057641+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:14.202654+0000 mgr.a (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:14.202654+0000 mgr.a (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:14.480215+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:14.480215+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:15.049807+0000 mon.a (mon.0) 167 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:15.049807+0000 mon.a (mon.0) 167 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:15.480499+0000 mon.a (mon.0) 168 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:15.480499+0000 mon.a (mon.0) 168 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:16.050040+0000 mon.a (mon.0) 169 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:16.050040+0000 mon.a (mon.0) 169 : audit [DBG] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:22.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:16.202811+0000 mgr.a (mgr.14150) 48 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:16.202811+0000 mgr.a (mgr.14150) 48 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:16.480520+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:16.480520+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:17.050045+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:17.050045+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.067841+0000 mon.a (mon.0) 172 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 
bash[23044]: cluster 2026-03-10T13:43:17.067841+0000 mon.a (mon.0) 172 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071200+0000 mon.a (mon.0) 173 : cluster [DBG] monmap epoch 2 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071200+0000 mon.a (mon.0) 173 : cluster [DBG] monmap epoch 2 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071220+0000 mon.a (mon.0) 174 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071220+0000 mon.a (mon.0) 174 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071230+0000 mon.a (mon.0) 175 : cluster [DBG] last_changed 2026-03-10T13:43:12.057421+0000 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071230+0000 mon.a (mon.0) 175 : cluster [DBG] last_changed 2026-03-10T13:43:12.057421+0000 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071238+0000 mon.a (mon.0) 176 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071238+0000 mon.a (mon.0) 176 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071247+0000 mon.a (mon.0) 177 : cluster [DBG] min_mon_release 19 (squid) 
2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071247+0000 mon.a (mon.0) 177 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071255+0000 mon.a (mon.0) 178 : cluster [DBG] election_strategy: 1 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071255+0000 mon.a (mon.0) 178 : cluster [DBG] election_strategy: 1 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071303+0000 mon.a (mon.0) 179 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071303+0000 mon.a (mon.0) 179 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071312+0000 mon.a (mon.0) 180 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071312+0000 mon.a (mon.0) 180 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071691+0000 mon.a (mon.0) 181 : cluster [DBG] fsmap 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071691+0000 mon.a (mon.0) 181 : cluster [DBG] fsmap 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 
2026-03-10T13:43:17.071714+0000 mon.a (mon.0) 182 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071714+0000 mon.a (mon.0) 182 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071878+0000 mon.a (mon.0) 183 : cluster [DBG] mgrmap e12: a(active, since 46s) 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071878+0000 mon.a (mon.0) 183 : cluster [DBG] mgrmap e12: a(active, since 46s) 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071958+0000 mon.a (mon.0) 184 : cluster [INF] overall HEALTH_OK 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.071958+0000 mon.a (mon.0) 184 : cluster [INF] overall HEALTH_OK 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:17.077591+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:17.077591+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:17.082074+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:17.082074+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 
2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:17.091895+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:17.091895+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:17.095745+0000 mon.a (mon.0) 188 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:17.095745+0000 mon.a (mon.0) 188 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:17.112744+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:17.112744+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:17.484668+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:17.484668+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 
cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:17.484865+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:17.484865+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.484963+0000 mon.a (mon.0) 193 : cluster [INF] mon.a calling monitor election 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.484963+0000 mon.a (mon.0) 193 : cluster [INF] mon.a calling monitor election 2026-03-10T13:43:22.751 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:17.485756+0000 mon.a (mon.0) 194 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:17.485756+0000 mon.a (mon.0) 194 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.485902+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:17.485902+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 
2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:18.202984+0000 mgr.a (mgr.14150) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:18.202984+0000 mgr.a (mgr.14150) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:18.480589+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:18.480589+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:19.481074+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:19.481074+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:19.481151+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:19.481151+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 
2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:20.203142+0000 mgr.a (mgr.14150) 50 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:20.203142+0000 mgr.a (mgr.14150) 50 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:20.480694+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:20.480694+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:21.480921+0000 mon.a (mon.0) 198 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:21.480921+0000 mon.a (mon.0) 198 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:22.481228+0000 mon.a (mon.0) 199 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 
2026-03-10T13:43:22.481228+0000 mon.a (mon.0) 199 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.489231+0000 mon.a (mon.0) 200 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.489231+0000 mon.a (mon.0) 200 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.492921+0000 mon.a (mon.0) 201 : cluster [DBG] monmap epoch 3 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.492921+0000 mon.a (mon.0) 201 : cluster [DBG] monmap epoch 3 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.492945+0000 mon.a (mon.0) 202 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.492945+0000 mon.a (mon.0) 202 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.492955+0000 mon.a (mon.0) 203 : cluster [DBG] last_changed 2026-03-10T13:43:17.480839+0000 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.492955+0000 mon.a (mon.0) 203 : cluster [DBG] last_changed 2026-03-10T13:43:17.480839+0000 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.492964+0000 
mon.a (mon.0) 204 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.492964+0000 mon.a (mon.0) 204 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.492973+0000 mon.a (mon.0) 205 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.492973+0000 mon.a (mon.0) 205 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.492982+0000 mon.a (mon.0) 206 : cluster [DBG] election_strategy: 1 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.492982+0000 mon.a (mon.0) 206 : cluster [DBG] election_strategy: 1 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.492991+0000 mon.a (mon.0) 207 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.492991+0000 mon.a (mon.0) 207 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.493000+0000 mon.a (mon.0) 208 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.493000+0000 mon.a (mon.0) 208 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] 
mon.c 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.493009+0000 mon.a (mon.0) 209 : cluster [DBG] 2: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.493009+0000 mon.a (mon.0) 209 : cluster [DBG] 2: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.493330+0000 mon.a (mon.0) 210 : cluster [DBG] fsmap 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.493330+0000 mon.a (mon.0) 210 : cluster [DBG] fsmap 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.493349+0000 mon.a (mon.0) 211 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.493349+0000 mon.a (mon.0) 211 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.493467+0000 mon.a (mon.0) 212 : cluster [DBG] mgrmap e12: a(active, since 52s) 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.493467+0000 mon.a (mon.0) 212 : cluster [DBG] mgrmap e12: a(active, since 52s) 2026-03-10T13:43:22.752 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.493540+0000 mon.a (mon.0) 213 : cluster [INF] overall HEALTH_OK 2026-03-10T13:43:22.753 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: cluster 2026-03-10T13:43:22.493540+0000 mon.a (mon.0) 213 : cluster [INF] 
overall HEALTH_OK 2026-03-10T13:43:22.753 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:22.499431+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:22.753 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:22.499431+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:22.753 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:22.503932+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:22.753 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:22.503932+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:22.753 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:22.507776+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:22.753 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:22.507776+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:22.753 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:22.510812+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:22.753 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:22.510812+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:22.753 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:22.513462+0000 mon.a (mon.0) 218 : 
audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:22.753 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:22.513462+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:22.753 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:22.514094+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:22.753 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:22.514094+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:22.753 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:22.514559+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:22.753 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:22 vm07 bash[23044]: audit 2026-03-10T13:43:22.514559+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:22.753 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T13:43:22.753 
INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":3,"fsid":"c9620084-1c86-11f1-bcc5-e3fb709eab0a","modified":"2026-03-10T13:43:17.480839Z","created":"2026-03-10T13:42:07.014183Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:3300","nonce":0},{"type":"v1","addr":"192.168.123.100:6789","nonce":0}]},"addr":"192.168.123.100:6789/0","public_addr":"192.168.123.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:3300","nonce":0},{"type":"v1","addr":"192.168.123.108:6789","nonce":0}]},"addr":"192.168.123.108:6789/0","public_addr":"192.168.123.108:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]} 2026-03-10T13:43:22.753 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 3 2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:17.484668+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:17.484668+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: 
dispatch 2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:17.484865+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:17.484865+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:17.484963+0000 mon.a (mon.0) 193 : cluster [INF] mon.a calling monitor election 2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:17.484963+0000 mon.a (mon.0) 193 : cluster [INF] mon.a calling monitor election 2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:17.485756+0000 mon.a (mon.0) 194 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:17.485756+0000 mon.a (mon.0) 194 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:17.485902+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:17.485902+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-10T13:43:22.800 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:18.202984+0000 mgr.a (mgr.14150) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:18.202984+0000 mgr.a (mgr.14150) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:18.480589+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:18.480589+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:19.481074+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:19.481074+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:19.481151+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:19.481151+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:22.800 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:20.203142+0000 mgr.a (mgr.14150) 50 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:20.480694+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:21.480921+0000 mon.a (mon.0) 198 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:22.481228+0000 mon.a (mon.0) 199 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:43:22.800 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:22.489231+0000 mon.a (mon.0) 200 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2)
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:22.492921+0000 mon.a (mon.0) 201 : cluster [DBG] monmap epoch 3
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:22.492945+0000 mon.a (mon.0) 202 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:22.492955+0000 mon.a (mon.0) 203 : cluster [DBG] last_changed 2026-03-10T13:43:17.480839+0000
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:22.492964+0000 mon.a (mon.0) 204 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:22.492973+0000 mon.a (mon.0) 205 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:22.492982+0000 mon.a (mon.0) 206 : cluster [DBG] election_strategy: 1
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:22.492991+0000 mon.a (mon.0) 207 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:22.493000+0000 mon.a (mon.0) 208 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:22.493009+0000 mon.a (mon.0) 209 : cluster [DBG] 2: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:22.493330+0000 mon.a (mon.0) 210 : cluster [DBG] fsmap
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:22.493349+0000 mon.a (mon.0) 211 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:22.493467+0000 mon.a (mon.0) 212 : cluster [DBG] mgrmap e12: a(active, since 52s)
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: cluster 2026-03-10T13:43:22.493540+0000 mon.a (mon.0) 213 : cluster [INF] overall HEALTH_OK
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:22.499431+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:22.503932+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:22.507776+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:22.510812+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:22.513462+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:22.514094+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:43:22.801 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:22 vm08 bash[23387]: audit 2026-03-10T13:43:22.514559+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:43:22.801 INFO:tasks.cephadm:Generating final ceph.conf file...
2026-03-10T13:43:22.802 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph config generate-minimal-conf
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: audit 2026-03-10T13:43:17.484668+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: audit 2026-03-10T13:43:17.484865+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:17.484963+0000 mon.a (mon.0) 193 : cluster [INF] mon.a calling monitor election
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: audit 2026-03-10T13:43:17.485756+0000 mon.a (mon.0) 194 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:17.485902+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:18.202984+0000 mgr.a (mgr.14150) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: audit 2026-03-10T13:43:18.480589+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:19.481074+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: audit 2026-03-10T13:43:19.481151+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:20.203142+0000 mgr.a (mgr.14150) 50 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: audit 2026-03-10T13:43:20.480694+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: audit 2026-03-10T13:43:21.480921+0000 mon.a (mon.0) 198 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: audit 2026-03-10T13:43:22.481228+0000 mon.a (mon.0) 199 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:22.489231+0000 mon.a (mon.0) 200 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2)
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:22.492921+0000 mon.a (mon.0) 201 : cluster [DBG] monmap epoch 3
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:22.492945+0000 mon.a (mon.0) 202 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:22.492955+0000 mon.a (mon.0) 203 : cluster [DBG] last_changed 2026-03-10T13:43:17.480839+0000
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:22.492964+0000 mon.a (mon.0) 204 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:22.492973+0000 mon.a (mon.0) 205 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:22.492982+0000 mon.a (mon.0) 206 : cluster [DBG] election_strategy: 1
2026-03-10T13:43:22.808 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:22.492991+0000 mon.a (mon.0) 207 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-10T13:43:22.809 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:22.493000+0000 mon.a (mon.0) 208 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c
2026-03-10T13:43:22.809 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:22.493009+0000 mon.a (mon.0) 209 : cluster [DBG] 2: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b
2026-03-10T13:43:22.809 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:22.493330+0000 mon.a (mon.0) 210 : cluster [DBG] fsmap
2026-03-10T13:43:22.809 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:22.493349+0000 mon.a (mon.0) 211 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T13:43:22.809 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:22.493467+0000 mon.a (mon.0) 212 : cluster [DBG] mgrmap e12: a(active, since 52s)
2026-03-10T13:43:22.809 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: cluster 2026-03-10T13:43:22.493540+0000 mon.a (mon.0) 213 : cluster [INF] overall HEALTH_OK
2026-03-10T13:43:22.809 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: audit 2026-03-10T13:43:22.499431+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:22.809 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: audit 2026-03-10T13:43:22.503932+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:22.809 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: audit 2026-03-10T13:43:22.507776+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:22.809 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: audit 2026-03-10T13:43:22.510812+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:22.809 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: audit 2026-03-10T13:43:22.513462+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:22.809 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: audit 2026-03-10T13:43:22.514094+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:43:22.809 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:22 vm00 bash[20748]: audit 2026-03-10T13:43:22.514559+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:43:23.920 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: cluster 2026-03-10T13:43:22.203330+0000 mgr.a (mgr.14150) 51 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:23.920 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: cephadm 2026-03-10T13:43:22.515115+0000 mgr.a (mgr.14150) 52 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: cephadm 2026-03-10T13:43:22.515224+0000 mgr.a (mgr.14150) 53 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: cephadm 2026-03-10T13:43:22.515281+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: cephadm 2026-03-10T13:43:22.567149+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: cephadm 2026-03-10T13:43:22.568736+0000 mgr.a (mgr.14150) 56 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: cephadm 2026-03-10T13:43:22.569059+0000 mgr.a (mgr.14150) 57 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:22.612849+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:22.616228+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:22.620613+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:22.627383+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:22.632322+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:22.635868+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:22.639134+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:22.657247+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:22.660316+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:22.663139+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:22.666082+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: cephadm 2026-03-10T13:43:22.666524+0000 mgr.a (mgr.14150) 58 : cephadm [INF] Reconfiguring mon.a (unknown last config time)...
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:22.666711+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:22.667162+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:22.667559+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: cephadm 2026-03-10T13:43:22.668032+0000 mgr.a (mgr.14150) 59 : cephadm [INF] Reconfiguring daemon mon.a on vm00
2026-03-10T13:43:23.921
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: cephadm 2026-03-10T13:43:22.668032+0000 mgr.a (mgr.14150) 59 : cephadm [INF] Reconfiguring daemon mon.a on vm00 2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:22.752837+0000 mon.a (mon.0) 235 : audit [DBG] from='client.? 192.168.123.108:0/359179018' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:22.752837+0000 mon.a (mon.0) 235 : audit [DBG] from='client.? 192.168.123.108:0/359179018' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.061320+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.061320+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.065926+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.065926+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:23.921 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.066720+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:43:23.922 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.066720+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:43:23.922 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.067146+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:43:23.922 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.067146+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:43:23.922 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.067547+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:23.922 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.067547+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:23.922 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.449384+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:23.922 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.449384+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:23.922 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 
bash[20748]: audit 2026-03-10T13:43:23.453093+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:23.922 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.453093+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:23.922 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.453969+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:43:23.922 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.453969+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:43:23.922 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.454376+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:43:23.922 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.454376+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:43:23.922 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.454750+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:23.922 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.454750+0000 mon.a (mon.0) 245 : 
audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:23.922 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.481249+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:23.922 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:23 vm00 bash[20748]: audit 2026-03-10T13:43:23.481249+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:23.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: cluster 2026-03-10T13:43:22.203330+0000 mgr.a (mgr.14150) 51 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:23.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: cluster 2026-03-10T13:43:22.203330+0000 mgr.a (mgr.14150) 51 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:23.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: cephadm 2026-03-10T13:43:22.515115+0000 mgr.a (mgr.14150) 52 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T13:43:23.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: cephadm 2026-03-10T13:43:22.515115+0000 mgr.a (mgr.14150) 52 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T13:43:23.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: cephadm 2026-03-10T13:43:22.515224+0000 mgr.a (mgr.14150) 53 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: cephadm 2026-03-10T13:43:22.515224+0000 mgr.a (mgr.14150) 53 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 
2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: cephadm 2026-03-10T13:43:22.515281+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: cephadm 2026-03-10T13:43:22.515281+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: cephadm 2026-03-10T13:43:22.567149+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: cephadm 2026-03-10T13:43:22.567149+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: cephadm 2026-03-10T13:43:22.568736+0000 mgr.a (mgr.14150) 56 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: cephadm 2026-03-10T13:43:22.568736+0000 mgr.a (mgr.14150) 56 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: cephadm 2026-03-10T13:43:22.569059+0000 mgr.a (mgr.14150) 57 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: cephadm 2026-03-10T13:43:22.569059+0000 mgr.a (mgr.14150) 57 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:43:24.000 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.612849+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.612849+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.616228+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.616228+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.620613+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.620613+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.627383+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.627383+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.632322+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.632322+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.635868+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.635868+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.639134+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.639134+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.657247+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.657247+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.660316+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 
2026-03-10T13:43:22.660316+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.663139+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.663139+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.666082+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.666082+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: cephadm 2026-03-10T13:43:22.666524+0000 mgr.a (mgr.14150) 58 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: cephadm 2026-03-10T13:43:22.666524+0000 mgr.a (mgr.14150) 58 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 
2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.666711+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.666711+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.667162+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.667162+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.667559+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.667559+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: cephadm 2026-03-10T13:43:22.668032+0000 mgr.a (mgr.14150) 59 : cephadm [INF] Reconfiguring daemon mon.a on vm00 2026-03-10T13:43:24.000 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: cephadm 2026-03-10T13:43:22.668032+0000 mgr.a (mgr.14150) 59 : cephadm [INF] Reconfiguring daemon mon.a on vm00 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.752837+0000 mon.a (mon.0) 235 : audit [DBG] from='client.? 192.168.123.108:0/359179018' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:22.752837+0000 mon.a (mon.0) 235 : audit [DBG] from='client.? 192.168.123.108:0/359179018' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.061320+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.061320+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.065926+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.065926+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.066720+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:43:24.000 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.066720+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.067146+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.067146+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.067547+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.067547+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.449384+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.449384+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 
bash[23044]: audit 2026-03-10T13:43:23.453093+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.453093+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.453969+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.453969+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.454376+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.454376+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:43:24.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.454750+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:24.001 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.454750+0000 mon.a (mon.0) 245 : 
audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:24.001 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.481249+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:24.001 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:23 vm07 bash[23044]: audit 2026-03-10T13:43:23.481249+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: cluster 2026-03-10T13:43:22.203330+0000 mgr.a (mgr.14150) 51 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: cluster 2026-03-10T13:43:22.203330+0000 mgr.a (mgr.14150) 51 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: cephadm 2026-03-10T13:43:22.515115+0000 mgr.a (mgr.14150) 52 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: cephadm 2026-03-10T13:43:22.515115+0000 mgr.a (mgr.14150) 52 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: cephadm 2026-03-10T13:43:22.515224+0000 mgr.a (mgr.14150) 53 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: cephadm 2026-03-10T13:43:22.515224+0000 mgr.a (mgr.14150) 53 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 
2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: cephadm 2026-03-10T13:43:22.515281+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: cephadm 2026-03-10T13:43:22.515281+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: cephadm 2026-03-10T13:43:22.567149+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: cephadm 2026-03-10T13:43:22.567149+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: cephadm 2026-03-10T13:43:22.568736+0000 mgr.a (mgr.14150) 56 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: cephadm 2026-03-10T13:43:22.568736+0000 mgr.a (mgr.14150) 56 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: cephadm 2026-03-10T13:43:22.569059+0000 mgr.a (mgr.14150) 57 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: cephadm 2026-03-10T13:43:22.569059+0000 mgr.a (mgr.14150) 57 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:43:24.088 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.612849+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.612849+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.616228+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.616228+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.620613+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.620613+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.627383+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.627383+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.632322+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.632322+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.635868+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.635868+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.639134+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.639134+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.657247+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.657247+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.660316+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 
2026-03-10T13:43:22.660316+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.663139+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.663139+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.666082+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.666082+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: cephadm 2026-03-10T13:43:22.666524+0000 mgr.a (mgr.14150) 58 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: cephadm 2026-03-10T13:43:22.666524+0000 mgr.a (mgr.14150) 58 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 
2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.666711+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.666711+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.667162+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.667162+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.667559+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.667559+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: cephadm 2026-03-10T13:43:22.668032+0000 mgr.a (mgr.14150) 59 : cephadm [INF] Reconfiguring daemon mon.a on vm00 2026-03-10T13:43:24.089 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: cephadm 2026-03-10T13:43:22.668032+0000 mgr.a (mgr.14150) 59 : cephadm [INF] Reconfiguring daemon mon.a on vm00 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.752837+0000 mon.a (mon.0) 235 : audit [DBG] from='client.? 192.168.123.108:0/359179018' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:22.752837+0000 mon.a (mon.0) 235 : audit [DBG] from='client.? 192.168.123.108:0/359179018' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.061320+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.061320+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.065926+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.065926+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.066720+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:43:24.089 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.066720+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.067146+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.067146+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.067547+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.067547+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.449384+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.449384+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 
bash[23387]: audit 2026-03-10T13:43:23.453093+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.453093+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.453969+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.453969+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.454376+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.454376+0000 mon.a (mon.0) 244 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.454750+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.454750+0000 mon.a (mon.0) 245 : 
audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.481249+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:24.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:23 vm08 bash[23387]: audit 2026-03-10T13:43:23.481249+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:43:24.828 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[21015]: debug 2026-03-10T13:43:24.476+0000 7f3d7e274640 -1 mgr.server handle_report got status from non-daemon mon.b 2026-03-10T13:43:25.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: cephadm 2026-03-10T13:43:23.066511+0000 mgr.a (mgr.14150) 60 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T13:43:25.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: cephadm 2026-03-10T13:43:23.066511+0000 mgr.a (mgr.14150) 60 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T13:43:25.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: cephadm 2026-03-10T13:43:23.067968+0000 mgr.a (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mon.b on vm07 2026-03-10T13:43:25.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: cephadm 2026-03-10T13:43:23.067968+0000 mgr.a (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mon.b on vm07 2026-03-10T13:43:25.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: cephadm 2026-03-10T13:43:23.453706+0000 mgr.a (mgr.14150) 62 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 
2026-03-10T13:43:25.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: cephadm 2026-03-10T13:43:23.453706+0000 mgr.a (mgr.14150) 62 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-10T13:43:25.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: cephadm 2026-03-10T13:43:23.455227+0000 mgr.a (mgr.14150) 63 : cephadm [INF] Reconfiguring daemon mon.c on vm08 2026-03-10T13:43:25.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: cephadm 2026-03-10T13:43:23.455227+0000 mgr.a (mgr.14150) 63 : cephadm [INF] Reconfiguring daemon mon.c on vm08 2026-03-10T13:43:25.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: audit 2026-03-10T13:43:23.827629+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:25.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: audit 2026-03-10T13:43:23.827629+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:25.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: audit 2026-03-10T13:43:23.831353+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:25.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: audit 2026-03-10T13:43:23.831353+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:25.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: audit 2026-03-10T13:43:23.832142+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:25.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: audit 2026-03-10T13:43:23.832142+0000 mon.a (mon.0) 249 : audit [DBG] 
from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:25.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: audit 2026-03-10T13:43:23.832976+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:25.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: audit 2026-03-10T13:43:23.832976+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:25.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: audit 2026-03-10T13:43:23.833323+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:25.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: audit 2026-03-10T13:43:23.833323+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:25.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: audit 2026-03-10T13:43:23.836500+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:25.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:24 vm08 bash[23387]: audit 2026-03-10T13:43:23.836500+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:25.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: cephadm 2026-03-10T13:43:23.066511+0000 mgr.a (mgr.14150) 60 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
2026-03-10T13:43:25.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: cephadm 2026-03-10T13:43:23.066511+0000 mgr.a (mgr.14150) 60 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T13:43:25.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: cephadm 2026-03-10T13:43:23.067968+0000 mgr.a (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mon.b on vm07 2026-03-10T13:43:25.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: cephadm 2026-03-10T13:43:23.067968+0000 mgr.a (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mon.b on vm07 2026-03-10T13:43:25.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: cephadm 2026-03-10T13:43:23.453706+0000 mgr.a (mgr.14150) 62 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-10T13:43:25.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: cephadm 2026-03-10T13:43:23.453706+0000 mgr.a (mgr.14150) 62 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 
2026-03-10T13:43:25.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: cephadm 2026-03-10T13:43:23.455227+0000 mgr.a (mgr.14150) 63 : cephadm [INF] Reconfiguring daemon mon.c on vm08 2026-03-10T13:43:25.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: cephadm 2026-03-10T13:43:23.455227+0000 mgr.a (mgr.14150) 63 : cephadm [INF] Reconfiguring daemon mon.c on vm08 2026-03-10T13:43:25.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: audit 2026-03-10T13:43:23.827629+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:25.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: audit 2026-03-10T13:43:23.827629+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:25.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: audit 2026-03-10T13:43:23.831353+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:25.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: audit 2026-03-10T13:43:23.831353+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:25.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: audit 2026-03-10T13:43:23.832142+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:25.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: audit 2026-03-10T13:43:23.832142+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:25.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 
bash[20748]: audit 2026-03-10T13:43:23.832976+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:25.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: audit 2026-03-10T13:43:23.832976+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:25.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: audit 2026-03-10T13:43:23.833323+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:25.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: audit 2026-03-10T13:43:23.833323+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:25.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: audit 2026-03-10T13:43:23.836500+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:25.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:24 vm00 bash[20748]: audit 2026-03-10T13:43:23.836500+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:25.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: cephadm 2026-03-10T13:43:23.066511+0000 mgr.a (mgr.14150) 60 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T13:43:25.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: cephadm 2026-03-10T13:43:23.066511+0000 mgr.a (mgr.14150) 60 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
2026-03-10T13:43:25.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: cephadm 2026-03-10T13:43:23.067968+0000 mgr.a (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mon.b on vm07 2026-03-10T13:43:25.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: cephadm 2026-03-10T13:43:23.067968+0000 mgr.a (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mon.b on vm07 2026-03-10T13:43:25.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: cephadm 2026-03-10T13:43:23.453706+0000 mgr.a (mgr.14150) 62 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-10T13:43:25.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: cephadm 2026-03-10T13:43:23.453706+0000 mgr.a (mgr.14150) 62 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-10T13:43:25.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: cephadm 2026-03-10T13:43:23.455227+0000 mgr.a (mgr.14150) 63 : cephadm [INF] Reconfiguring daemon mon.c on vm08 2026-03-10T13:43:25.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: cephadm 2026-03-10T13:43:23.455227+0000 mgr.a (mgr.14150) 63 : cephadm [INF] Reconfiguring daemon mon.c on vm08 2026-03-10T13:43:25.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: audit 2026-03-10T13:43:23.827629+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:25.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: audit 2026-03-10T13:43:23.827629+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:25.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: audit 2026-03-10T13:43:23.831353+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:25.249 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: audit 2026-03-10T13:43:23.831353+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:25.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: audit 2026-03-10T13:43:23.832142+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:25.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: audit 2026-03-10T13:43:23.832142+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:25.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: audit 2026-03-10T13:43:23.832976+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:25.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: audit 2026-03-10T13:43:23.832976+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:25.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: audit 2026-03-10T13:43:23.833323+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:25.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: audit 2026-03-10T13:43:23.833323+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:25.249 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: audit 2026-03-10T13:43:23.836500+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:25.250 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:24 vm07 bash[23044]: audit 2026-03-10T13:43:23.836500+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:26.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:25 vm00 bash[20748]: cluster 2026-03-10T13:43:24.203496+0000 mgr.a (mgr.14150) 64 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:26.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:25 vm00 bash[20748]: cluster 2026-03-10T13:43:24.203496+0000 mgr.a (mgr.14150) 64 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:26.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:25 vm07 bash[23044]: cluster 2026-03-10T13:43:24.203496+0000 mgr.a (mgr.14150) 64 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:26.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:25 vm07 bash[23044]: cluster 2026-03-10T13:43:24.203496+0000 mgr.a (mgr.14150) 64 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:26.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:25 vm08 bash[23387]: cluster 2026-03-10T13:43:24.203496+0000 mgr.a (mgr.14150) 64 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:26.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:25 vm08 bash[23387]: cluster 2026-03-10T13:43:24.203496+0000 mgr.a (mgr.14150) 64 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:27.408 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:43:27.647 
INFO:teuthology.orchestra.run.vm00.stdout:# minimal ceph.conf for c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:43:27.647 INFO:teuthology.orchestra.run.vm00.stdout:[global] 2026-03-10T13:43:27.647 INFO:teuthology.orchestra.run.vm00.stdout: fsid = c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:43:27.647 INFO:teuthology.orchestra.run.vm00.stdout: mon_host = [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] 2026-03-10T13:43:27.695 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 2026-03-10T13:43:27.695 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T13:43:27.695 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T13:43:27.745 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T13:43:27.745 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:43:27.793 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T13:43:27.793 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T13:43:27.800 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T13:43:27.801 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:43:27.850 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T13:43:27.850 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.conf 2026-03-10T13:43:27.857 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-10T13:43:27.857 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:43:27.905 INFO:tasks.cephadm:Adding mgr.a on vm00 2026-03-10T13:43:27.905 INFO:tasks.cephadm:Adding mgr.b on vm07 2026-03-10T13:43:27.905 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k 
/etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph orch apply mgr '2;vm00=a;vm07=b' 2026-03-10T13:43:28.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:28 vm07 bash[23044]: cluster 2026-03-10T13:43:26.203650+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:28.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:28 vm07 bash[23044]: cluster 2026-03-10T13:43:26.203650+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:28.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:28 vm07 bash[23044]: audit 2026-03-10T13:43:27.647917+0000 mon.a (mon.0) 253 : audit [DBG] from='client.? 192.168.123.100:0/2616816136' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:28.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:28 vm07 bash[23044]: audit 2026-03-10T13:43:27.647917+0000 mon.a (mon.0) 253 : audit [DBG] from='client.? 192.168.123.100:0/2616816136' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:28.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:28 vm08 bash[23387]: cluster 2026-03-10T13:43:26.203650+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:28.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:28 vm08 bash[23387]: cluster 2026-03-10T13:43:26.203650+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:28.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:28 vm08 bash[23387]: audit 2026-03-10T13:43:27.647917+0000 mon.a (mon.0) 253 : audit [DBG] from='client.? 
192.168.123.100:0/2616816136' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:28.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:28 vm08 bash[23387]: audit 2026-03-10T13:43:27.647917+0000 mon.a (mon.0) 253 : audit [DBG] from='client.? 192.168.123.100:0/2616816136' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:28.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:28 vm00 bash[20748]: cluster 2026-03-10T13:43:26.203650+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:28.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:28 vm00 bash[20748]: cluster 2026-03-10T13:43:26.203650+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:28.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:28 vm00 bash[20748]: audit 2026-03-10T13:43:27.647917+0000 mon.a (mon.0) 253 : audit [DBG] from='client.? 192.168.123.100:0/2616816136' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:28.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:28 vm00 bash[20748]: audit 2026-03-10T13:43:27.647917+0000 mon.a (mon.0) 253 : audit [DBG] from='client.? 
192.168.123.100:0/2616816136' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:30.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:30 vm08 bash[23387]: cluster 2026-03-10T13:43:28.203788+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:30.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:30 vm08 bash[23387]: cluster 2026-03-10T13:43:28.203788+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:30.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:30 vm00 bash[20748]: cluster 2026-03-10T13:43:28.203788+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:30.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:30 vm00 bash[20748]: cluster 2026-03-10T13:43:28.203788+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:30.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:30 vm07 bash[23044]: cluster 2026-03-10T13:43:28.203788+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:30.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:30 vm07 bash[23044]: cluster 2026-03-10T13:43:28.203788+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:31.546 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.c/config 2026-03-10T13:43:31.790 INFO:teuthology.orchestra.run.vm08.stdout:Scheduled mgr update... 2026-03-10T13:43:31.838 DEBUG:teuthology.orchestra.run.vm07:mgr.b> sudo journalctl -f -n 0 -u ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mgr.b.service 2026-03-10T13:43:31.840 INFO:tasks.cephadm:Deploying OSDs... 
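The "Deploying OSDs..." step that follows probes each host for scratch devices: it tries `/scratch_devs` first (absent here, hence the exit status 1), then falls back to enumerating `ls /dev/[sv]d?`, removes the root device (`Removing root device: /dev/vda from device list`), and then per device runs `stat`, a one-sector `dd` read, and a mount check. The root-device filtering step can be sketched as below; the `filter_root_dev` helper name is ours for illustration, not teuthology's actual code, and the per-device checks are shown only as comments since they need real block devices:

```shell
# filter_root_dev: read device paths on stdin, drop the exact root device
# given as $1, print the rest. Mirrors the log's
# "Removing root device: /dev/vda from device list" step.
# (-F fixed string, -x whole-line match, -v invert)
filter_root_dev() {
    grep -v -x -F "$1"
}

printf '/dev/vda\n/dev/vdb\n/dev/vdc\n/dev/vdd\n/dev/vde\n' | filter_root_dev /dev/vda

# Each surviving device is then vetted roughly like this (requires real
# block devices, so shown as comments only):
#   stat "$d"                                  # node exists
#   sudo dd if="$d" of=/dev/null count=1       # first sector readable
#   ! mount | grep -v devtmpfs | grep -q "$d"  # not already mounted
```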
2026-03-10T13:43:31.840 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-10T13:43:31.840 DEBUG:teuthology.orchestra.run.vm00:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T13:43:31.843 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T13:43:31.843 DEBUG:teuthology.orchestra.run.vm00:> ls /dev/[sv]d? 2026-03-10T13:43:31.887 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vda 2026-03-10T13:43:31.888 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdb 2026-03-10T13:43:31.888 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdc 2026-03-10T13:43:31.888 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vdd 2026-03-10T13:43:31.888 INFO:teuthology.orchestra.run.vm00.stdout:/dev/vde 2026-03-10T13:43:31.888 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T13:43:31.888 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T13:43:31.888 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdb 2026-03-10T13:43:31.932 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdb 2026-03-10T13:43:31.932 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T13:43:31.932 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-10T13:43:31.932 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T13:43:31.932 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 13:36:48.959903830 +0000 2026-03-10T13:43:31.932 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 13:36:47.951903830 +0000 2026-03-10T13:43:31.932 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 13:36:47.951903830 +0000 2026-03-10T13:43:31.932 INFO:teuthology.orchestra.run.vm00.stdout: Birth: - 2026-03-10T13:43:31.932 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T13:43:31.980 INFO:teuthology.orchestra.run.vm00.stderr:1+0 
records in 2026-03-10T13:43:31.980 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-10T13:43:31.980 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000139801 s, 3.7 MB/s 2026-03-10T13:43:31.981 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-10T13:43:32.030 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdc 2026-03-10T13:43:32.076 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdc 2026-03-10T13:43:32.076 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T13:43:32.076 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-10T13:43:32.076 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T13:43:32.076 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 13:36:48.967903830 +0000 2026-03-10T13:43:32.076 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 13:36:47.939903830 +0000 2026-03-10T13:43:32.076 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 13:36:47.939903830 +0000 2026-03-10T13:43:32.076 INFO:teuthology.orchestra.run.vm00.stdout: Birth: - 2026-03-10T13:43:32.076 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-10T13:43:32.098 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: cluster 2026-03-10T13:43:30.203955+0000 mgr.a (mgr.14150) 67 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:32.098 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: cluster 2026-03-10T13:43:30.203955+0000 mgr.a (mgr.14150) 67 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:32.098 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: audit 2026-03-10T13:43:31.790275+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:32.098 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: audit 2026-03-10T13:43:31.790275+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:32.098 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: audit 2026-03-10T13:43:31.791090+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:32.098 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: audit 2026-03-10T13:43:31.791090+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:32.098 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: audit 2026-03-10T13:43:31.791914+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:32.099 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: audit 2026-03-10T13:43:31.791914+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:32.099 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: audit 2026-03-10T13:43:31.792262+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:32.099 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: audit 2026-03-10T13:43:31.792262+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: 
dispatch 2026-03-10T13:43:32.099 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: audit 2026-03-10T13:43:31.795356+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:32.099 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: audit 2026-03-10T13:43:31.795356+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:32.099 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: audit 2026-03-10T13:43:31.796448+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T13:43:32.099 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: audit 2026-03-10T13:43:31.796448+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T13:43:32.099 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: audit 2026-03-10T13:43:31.798153+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-10T13:43:32.099 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: audit 2026-03-10T13:43:31.798153+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-10T13:43:32.099 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 
13:43:32 vm07 bash[23044]: audit 2026-03-10T13:43:31.799606+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T13:43:32.099 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: audit 2026-03-10T13:43:31.799606+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T13:43:32.099 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: audit 2026-03-10T13:43:31.800002+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:32.099 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23044]: audit 2026-03-10T13:43:31.800002+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:32.124 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-10T13:43:32.124 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-10T13:43:32.124 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.00015514 s, 3.3 MB/s 2026-03-10T13:43:32.125 DEBUG:teuthology.orchestra.run.vm00:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-10T13:43:32.169 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vdd 2026-03-10T13:43:32.216 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vdd 2026-03-10T13:43:32.216 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T13:43:32.216 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-10T13:43:32.216 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T13:43:32.216 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 13:36:48.959903830 +0000 2026-03-10T13:43:32.216 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 13:36:47.939903830 +0000 2026-03-10T13:43:32.217 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 13:36:47.939903830 +0000 2026-03-10T13:43:32.217 INFO:teuthology.orchestra.run.vm00.stdout: Birth: - 2026-03-10T13:43:32.217 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-10T13:43:32.263 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: cluster 2026-03-10T13:43:30.203955+0000 mgr.a (mgr.14150) 67 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:32.263 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: cluster 2026-03-10T13:43:30.203955+0000 mgr.a (mgr.14150) 67 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:32.263 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: audit 2026-03-10T13:43:31.790275+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:32.263 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: audit 2026-03-10T13:43:31.790275+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 
2026-03-10T13:43:32.263 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: audit 2026-03-10T13:43:31.791090+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:32.263 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: audit 2026-03-10T13:43:31.791090+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:32.263 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: audit 2026-03-10T13:43:31.791914+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:32.263 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: audit 2026-03-10T13:43:31.791914+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:32.263 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: audit 2026-03-10T13:43:31.792262+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:32.263 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: audit 2026-03-10T13:43:31.792262+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:32.263 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: audit 2026-03-10T13:43:31.795356+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:32.263 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: audit 2026-03-10T13:43:31.795356+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:32.263 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: audit 2026-03-10T13:43:31.796448+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T13:43:32.263 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: audit 2026-03-10T13:43:31.796448+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T13:43:32.263 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: audit 2026-03-10T13:43:31.798153+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-10T13:43:32.263 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: audit 2026-03-10T13:43:31.798153+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-10T13:43:32.263 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: audit 2026-03-10T13:43:31.799606+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T13:43:32.263 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: audit 2026-03-10T13:43:31.799606+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T13:43:32.263 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: audit 2026-03-10T13:43:31.800002+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:32.263 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:32 vm00 bash[20748]: audit 2026-03-10T13:43:31.800002+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:32.264 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-10T13:43:32.264 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-10T13:43:32.264 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000163225 s, 3.1 MB/s 2026-03-10T13:43:32.265 DEBUG:teuthology.orchestra.run.vm00:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-10T13:43:32.310 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vde 2026-03-10T13:43:32.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: cluster 2026-03-10T13:43:30.203955+0000 mgr.a (mgr.14150) 67 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:32.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: cluster 2026-03-10T13:43:30.203955+0000 mgr.a (mgr.14150) 67 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:32.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: audit 2026-03-10T13:43:31.790275+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:32.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: audit 2026-03-10T13:43:31.790275+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:32.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: audit 2026-03-10T13:43:31.791090+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:32.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: audit 2026-03-10T13:43:31.791090+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:32.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: audit 2026-03-10T13:43:31.791914+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:32.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: 
audit 2026-03-10T13:43:31.791914+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:32.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: audit 2026-03-10T13:43:31.792262+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:32.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: audit 2026-03-10T13:43:31.792262+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:32.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: audit 2026-03-10T13:43:31.795356+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:32.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: audit 2026-03-10T13:43:31.795356+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:32.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: audit 2026-03-10T13:43:31.796448+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T13:43:32.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: audit 2026-03-10T13:43:31.796448+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T13:43:32.338 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: audit 2026-03-10T13:43:31.798153+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-10T13:43:32.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: audit 2026-03-10T13:43:31.798153+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-10T13:43:32.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: audit 2026-03-10T13:43:31.799606+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T13:43:32.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: audit 2026-03-10T13:43:31.799606+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T13:43:32.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: audit 2026-03-10T13:43:31.800002+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:32.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:32 vm08 bash[23387]: audit 2026-03-10T13:43:31.800002+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:32.356 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vde 2026-03-10T13:43:32.356 INFO:teuthology.orchestra.run.vm00.stdout: Size: 0 Blocks: 0 IO 
Block: 4096 block special file 2026-03-10T13:43:32.356 INFO:teuthology.orchestra.run.vm00.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-10T13:43:32.356 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T13:43:32.356 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-10 13:36:48.963903830 +0000 2026-03-10T13:43:32.356 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-10 13:36:47.939903830 +0000 2026-03-10T13:43:32.356 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-10 13:36:47.939903830 +0000 2026-03-10T13:43:32.356 INFO:teuthology.orchestra.run.vm00.stdout: Birth: - 2026-03-10T13:43:32.356 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-10T13:43:32.403 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-10T13:43:32.403 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-10T13:43:32.403 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000164317 s, 3.1 MB/s 2026-03-10T13:43:32.404 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-10T13:43:32.449 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T13:43:32.449 DEBUG:teuthology.orchestra.run.vm07:> dd if=/scratch_devs of=/dev/stdout 2026-03-10T13:43:32.451 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T13:43:32.451 DEBUG:teuthology.orchestra.run.vm07:> ls /dev/[sv]d? 
2026-03-10T13:43:32.495 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vda 2026-03-10T13:43:32.495 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdb 2026-03-10T13:43:32.495 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdc 2026-03-10T13:43:32.495 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdd 2026-03-10T13:43:32.495 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vde 2026-03-10T13:43:32.495 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-10T13:43:32.495 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-10T13:43:32.495 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdb 2026-03-10T13:43:32.527 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T13:43:32.528 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:32 vm07 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T13:43:32.528 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:32 vm07 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T13:43:32.531 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdb 2026-03-10T13:43:32.531 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-10T13:43:32.531 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-10T13:43:32.531 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-10T13:43:32.531 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-10 13:35:58.837613348 +0000 2026-03-10T13:43:32.531 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-10 13:35:57.753613348 +0000 2026-03-10T13:43:32.531 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-10 13:35:57.753613348 +0000 2026-03-10T13:43:32.531 INFO:teuthology.orchestra.run.vm07.stdout: Birth: - 2026-03-10T13:43:32.531 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-10T13:43:32.578 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in 2026-03-10T13:43:32.578 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out 2026-03-10T13:43:32.578 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000626813 s, 817 kB/s 2026-03-10T13:43:32.579 DEBUG:teuthology.orchestra.run.vm07:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T13:43:32.630 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdc
2026-03-10T13:43:32.680 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdc
2026-03-10T13:43:32.680 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T13:43:32.680 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-10T13:43:32.680 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T13:43:32.680 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-10 13:35:58.861613348 +0000
2026-03-10T13:43:32.681 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-10 13:35:57.801613348 +0000
2026-03-10T13:43:32.681 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-10 13:35:57.801613348 +0000
2026-03-10T13:43:32.681 INFO:teuthology.orchestra.run.vm07.stdout: Birth: -
2026-03-10T13:43:32.688 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T13:43:32.753 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-10T13:43:32.753 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-10T13:43:32.753 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000150932 s, 3.4 MB/s
2026-03-10T13:43:32.757 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T13:43:32.822 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdd
2026-03-10T13:43:32.871 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdd
2026-03-10T13:43:32.871 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T13:43:32.871 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-10T13:43:32.871 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T13:43:32.871 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-10 13:35:58.837613348 +0000
2026-03-10T13:43:32.871 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-10 13:35:57.761613348 +0000
2026-03-10T13:43:32.871 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-10 13:35:57.761613348 +0000
2026-03-10T13:43:32.871 INFO:teuthology.orchestra.run.vm07.stdout: Birth: -
2026-03-10T13:43:32.871 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T13:43:32.888 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:32 vm07 systemd[1]: Started Ceph mgr.b for c9620084-1c86-11f1-bcc5-e3fb709eab0a.
2026-03-10T13:43:32.888 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23484]: debug 2026-03-10T13:43:32.738+0000 7f3ae822a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T13:43:32.888 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23484]: debug 2026-03-10T13:43:32.774+0000 7f3ae822a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T13:43:32.894 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-10T13:43:32.894 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-10T13:43:32.894 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000155742 s, 3.3 MB/s
2026-03-10T13:43:32.894 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T13:43:32.940 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vde
2026-03-10T13:43:32.986 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vde
2026-03-10T13:43:32.987 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T13:43:32.987 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-10T13:43:32.987 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T13:43:32.987 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-10 13:35:58.857613348 +0000
2026-03-10T13:43:32.987 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-10 13:35:57.801613348 +0000
2026-03-10T13:43:32.987 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-10 13:35:57.801613348 +0000
2026-03-10T13:43:32.987 INFO:teuthology.orchestra.run.vm07.stdout: Birth: -
2026-03-10T13:43:32.987 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T13:43:33.034 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-10T13:43:33.034 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-10T13:43:33.034 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000136976 s, 3.7 MB/s
2026-03-10T13:43:33.034 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T13:43:33.085 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-10T13:43:33.085 DEBUG:teuthology.orchestra.run.vm08:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T13:43:33.088 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T13:43:33.088 DEBUG:teuthology.orchestra.run.vm08:> ls /dev/[sv]d?
2026-03-10T13:43:33.133 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vda
2026-03-10T13:43:33.133 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdb
2026-03-10T13:43:33.133 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdc
2026-03-10T13:43:33.133 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdd
2026-03-10T13:43:33.133 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vde
2026-03-10T13:43:33.133 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T13:43:33.133 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T13:43:33.133 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdb
2026-03-10T13:43:33.174 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23044]: audit 2026-03-10T13:43:31.785871+0000 mgr.a (mgr.14150) 68 : audit [DBG] from='client.24109 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=a;vm07=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:43:33.174 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23044]: audit 2026-03-10T13:43:31.785871+0000 mgr.a (mgr.14150) 68 : audit [DBG] from='client.24109 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=a;vm07=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:43:33.174 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23044]: cephadm 2026-03-10T13:43:31.786624+0000 mgr.a (mgr.14150) 69 : cephadm [INF] Saving service mgr spec with placement vm00=a;vm07=b;count:2
2026-03-10T13:43:33.174 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23044]: cephadm 2026-03-10T13:43:31.786624+0000 mgr.a (mgr.14150) 69 : cephadm [INF] Saving service mgr spec with placement vm00=a;vm07=b;count:2
2026-03-10T13:43:33.174 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23044]: cephadm 2026-03-10T13:43:31.800389+0000 mgr.a (mgr.14150) 70 : cephadm [INF] Deploying daemon mgr.b on vm07
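For context, the scratch-device discovery logged above (list `/dev/[sv]d?`, then drop the root device) can be sketched as the following shell loop. This is a rough illustration, not teuthology's actual implementation; the device list and root device are hard-coded here rather than read from a live VM.

```shell
# Sketch of the scratch-device discovery seen in the log: enumerate
# virtio/SCSI disks, then remove the root device from the candidate list.
# On a real VM the list would come from: devs=$(ls /dev/[sv]d?)
devs="/dev/vda /dev/vdb /dev/vdc /dev/vdd /dev/vde"
root_dev="/dev/vda"   # assumption: the root filesystem lives on vda

scratch=""
for d in $devs; do
    if [ "$d" = "$root_dev" ]; then
        echo "Removing root device: $d from device list"
        continue
    fi
    scratch="$scratch $d"
done
echo "devs=[$scratch ]"
```

The remaining devices are what the harness then probes one by one with `stat`, `dd`, and a mount-table check.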
2026-03-10T13:43:33.174 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23044]: cephadm 2026-03-10T13:43:31.800389+0000 mgr.a (mgr.14150) 70 : cephadm [INF] Deploying daemon mgr.b on vm07
2026-03-10T13:43:33.174 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23044]: audit 2026-03-10T13:43:32.557713+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.174 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23044]: audit 2026-03-10T13:43:32.557713+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.174 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23044]: audit 2026-03-10T13:43:32.564430+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.174 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23044]: audit 2026-03-10T13:43:32.564430+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.174 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23044]: audit 2026-03-10T13:43:32.568193+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.174 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23044]: audit 2026-03-10T13:43:32.568193+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.174 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23044]: audit 2026-03-10T13:43:32.571347+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.175 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23044]: audit 2026-03-10T13:43:32.571347+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.175 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23044]: audit 2026-03-10T13:43:32.586021+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:43:33.175 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23044]: audit 2026-03-10T13:43:32.586021+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:43:33.175 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:32 vm07 bash[23484]: debug 2026-03-10T13:43:32.882+0000 7f3ae822a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T13:43:33.175 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23484]: debug 2026-03-10T13:43:33.170+0000 7f3ae822a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T13:43:33.177 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdb
2026-03-10T13:43:33.177 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T13:43:33.177 INFO:teuthology.orchestra.run.vm08.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-10T13:43:33.177 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T13:43:33.177 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 13:36:23.656905423 +0000
2026-03-10T13:43:33.177 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 13:36:22.748905423 +0000
2026-03-10T13:43:33.177 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 13:36:22.748905423 +0000
2026-03-10T13:43:33.177 INFO:teuthology.orchestra.run.vm08.stdout: Birth: -
2026-03-10T13:43:33.177 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T13:43:33.225 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T13:43:33.225 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-10T13:43:33.225 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000118833 s, 4.3 MB/s
2026-03-10T13:43:33.225 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T13:43:33.271 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdc
2026-03-10T13:43:33.318 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdc
2026-03-10T13:43:33.318 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T13:43:33.318 INFO:teuthology.orchestra.run.vm08.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-10T13:43:33.318 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T13:43:33.318 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 13:36:23.672905423 +0000
2026-03-10T13:43:33.318 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 13:36:22.748905423 +0000
2026-03-10T13:43:33.318 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 13:36:22.748905423 +0000
2026-03-10T13:43:33.318 INFO:teuthology.orchestra.run.vm08.stdout: Birth: -
2026-03-10T13:43:33.318 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T13:43:33.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:33 vm08 bash[23387]: audit 2026-03-10T13:43:31.785871+0000 mgr.a (mgr.14150) 68 : audit [DBG] from='client.24109 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=a;vm07=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:43:33.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:33 vm08 bash[23387]: audit 2026-03-10T13:43:31.785871+0000 mgr.a (mgr.14150) 68 : audit [DBG] from='client.24109 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=a;vm07=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:43:33.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:33 vm08 bash[23387]: cephadm 2026-03-10T13:43:31.786624+0000 mgr.a (mgr.14150) 69 : cephadm [INF] Saving service mgr spec with placement vm00=a;vm07=b;count:2
2026-03-10T13:43:33.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:33 vm08 bash[23387]: cephadm 2026-03-10T13:43:31.786624+0000 mgr.a (mgr.14150) 69 : cephadm [INF] Saving service mgr spec with placement vm00=a;vm07=b;count:2
2026-03-10T13:43:33.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:33 vm08 bash[23387]: cephadm 2026-03-10T13:43:31.800389+0000 mgr.a (mgr.14150) 70 : cephadm [INF] Deploying daemon mgr.b on vm07
2026-03-10T13:43:33.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:33 vm08 bash[23387]: cephadm 2026-03-10T13:43:31.800389+0000 mgr.a (mgr.14150) 70 : cephadm [INF] Deploying daemon mgr.b on vm07
2026-03-10T13:43:33.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:33 vm08 bash[23387]: audit 2026-03-10T13:43:32.557713+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:33 vm08 bash[23387]: audit 2026-03-10T13:43:32.557713+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:33 vm08 bash[23387]: audit 2026-03-10T13:43:32.564430+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:33 vm08 bash[23387]: audit 2026-03-10T13:43:32.564430+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:33 vm08 bash[23387]: audit 2026-03-10T13:43:32.568193+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:33 vm08 bash[23387]: audit 2026-03-10T13:43:32.568193+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:33 vm08 bash[23387]: audit 2026-03-10T13:43:32.571347+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:33 vm08 bash[23387]: audit 2026-03-10T13:43:32.571347+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:33 vm08 bash[23387]: audit 2026-03-10T13:43:32.586021+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:43:33.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:33 vm08 bash[23387]: audit 2026-03-10T13:43:32.586021+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:43:33.346 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T13:43:33.346 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-10T13:43:33.346 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000177863 s, 2.9 MB/s
2026-03-10T13:43:33.347 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T13:43:33.393 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdd
2026-03-10T13:43:33.437 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdd
2026-03-10T13:43:33.437 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T13:43:33.437 INFO:teuthology.orchestra.run.vm08.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-10T13:43:33.437 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T13:43:33.437 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 13:36:23.656905423 +0000
2026-03-10T13:43:33.437 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 13:36:22.708905423 +0000
2026-03-10T13:43:33.438 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 13:36:22.708905423 +0000
2026-03-10T13:43:33.438 INFO:teuthology.orchestra.run.vm08.stdout: Birth: -
2026-03-10T13:43:33.438 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T13:43:33.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:33 vm00 bash[20748]: audit 2026-03-10T13:43:31.785871+0000 mgr.a (mgr.14150) 68 : audit [DBG] from='client.24109 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=a;vm07=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:43:33.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:33 vm00 bash[20748]: audit 2026-03-10T13:43:31.785871+0000 mgr.a (mgr.14150) 68 : audit [DBG] from='client.24109 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm00=a;vm07=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:43:33.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:33 vm00 bash[20748]: cephadm 2026-03-10T13:43:31.786624+0000 mgr.a (mgr.14150) 69 : cephadm [INF] Saving service mgr spec with placement vm00=a;vm07=b;count:2
2026-03-10T13:43:33.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:33 vm00 bash[20748]: cephadm 2026-03-10T13:43:31.786624+0000 mgr.a (mgr.14150) 69 : cephadm [INF] Saving service mgr spec with placement vm00=a;vm07=b;count:2
2026-03-10T13:43:33.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:33 vm00 bash[20748]: cephadm 2026-03-10T13:43:31.800389+0000 mgr.a (mgr.14150) 70 : cephadm [INF] Deploying daemon mgr.b on vm07
2026-03-10T13:43:33.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:33 vm00 bash[20748]: cephadm 2026-03-10T13:43:31.800389+0000 mgr.a (mgr.14150) 70 : cephadm [INF] Deploying daemon mgr.b on vm07
2026-03-10T13:43:33.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:33 vm00 bash[20748]: audit 2026-03-10T13:43:32.557713+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:33 vm00 bash[20748]: audit 2026-03-10T13:43:32.557713+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:33 vm00 bash[20748]: audit 2026-03-10T13:43:32.564430+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:33 vm00 bash[20748]: audit 2026-03-10T13:43:32.564430+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:33 vm00 bash[20748]: audit 2026-03-10T13:43:32.568193+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:33 vm00 bash[20748]: audit 2026-03-10T13:43:32.568193+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:33 vm00 bash[20748]: audit 2026-03-10T13:43:32.571347+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:33 vm00 bash[20748]: audit 2026-03-10T13:43:32.571347+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:33.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:33 vm00 bash[20748]: audit 2026-03-10T13:43:32.586021+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:43:33.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:33 vm00 bash[20748]: audit 2026-03-10T13:43:32.586021+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:43:33.484 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T13:43:33.484 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-10T13:43:33.484 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000123852 s, 4.1 MB/s
2026-03-10T13:43:33.485 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T13:43:33.530 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vde
2026-03-10T13:43:33.573 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vde
2026-03-10T13:43:33.573 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T13:43:33.573 INFO:teuthology.orchestra.run.vm08.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-10T13:43:33.573 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T13:43:33.573 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-10 13:36:23.672905423 +0000
2026-03-10T13:43:33.573 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-10 13:36:22.748905423 +0000
2026-03-10T13:43:33.573 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-10 13:36:22.748905423 +0000
2026-03-10T13:43:33.573 INFO:teuthology.orchestra.run.vm08.stdout: Birth: -
2026-03-10T13:43:33.573 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T13:43:33.620 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in
2026-03-10T13:43:33.620 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out
2026-03-10T13:43:33.620 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000113403 s, 4.5 MB/s
2026-03-10T13:43:33.621 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T13:43:33.666 INFO:tasks.cephadm:Deploying osd.0 on vm00 with /dev/vde...
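The per-device checks repeated above (stat the node, read one 512-byte block with dd, confirm the device is absent from the mount table) amount to a loop like the following sketch. This is an illustration of the pattern, not teuthology's code, and it is exercised on a temp file rather than a real block device.

```shell
# Sanity-check a candidate scratch device the way the harness does:
# it must exist, be readable, and not appear in the mount table.
check_scratch_dev() {
    dev="$1"
    stat "$dev" > /dev/null || return 1                        # node exists
    dd if="$dev" of=/dev/null count=1 2> /dev/null || return 1 # readable
    ! mount | grep -v devtmpfs | grep -q "$dev"                # not mounted
}

# Stand-in for /dev/vdb on a live VM: a 512-byte temp file.
tmpdev=$(mktemp)
dd if=/dev/zero of="$tmpdev" count=1 2> /dev/null
if check_scratch_dev "$tmpdev"; then result="usable"; else result="unusable"; fi
echo "$result"
rm -f "$tmpdev"
```

The `grep -v devtmpfs` filter mirrors the logged command: devtmpfs lines in `mount` output mention `/dev` itself and would otherwise cause false positives.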
2026-03-10T13:43:33.666 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- lvm zap /dev/vde
2026-03-10T13:43:33.957 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23484]: debug 2026-03-10T13:43:33.602+0000 7f3ae822a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T13:43:33.957 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23484]: debug 2026-03-10T13:43:33.690+0000 7f3ae822a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T13:43:33.957 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23484]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T13:43:33.957 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23484]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
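Before each OSD is deployed, the harness zaps the target device through cephadm's ceph-volume wrapper, as in the command logged above. A minimal sketch of assembling that invocation follows; the image tag and fsid here are hypothetical placeholders, not this cluster's values.

```shell
# Assemble a cephadm ceph-volume "lvm zap" invocation of the same shape
# as the logged command. Image tag and fsid are placeholders.
image="quay.ceph.io/ceph-ci/ceph:example-sha1"
fsid="00000000-0000-0000-0000-000000000000"
dev="/dev/vde"

zap_cmd="sudo cephadm --image $image ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid $fsid -- lvm zap $dev"
echo "$zap_cmd"
```

Everything after the bare `--` is passed through to ceph-volume inside the container, so the actual destructive step is `ceph-volume lvm zap /dev/vde`.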
2026-03-10T13:43:33.957 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23484]: from numpy import show_config as show_numpy_config
2026-03-10T13:43:33.957 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23484]: debug 2026-03-10T13:43:33.818+0000 7f3ae822a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T13:43:34.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:34 vm07 bash[23044]: cluster 2026-03-10T13:43:32.204265+0000 mgr.a (mgr.14150) 71 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:34.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:34 vm07 bash[23044]: cluster 2026-03-10T13:43:32.204265+0000 mgr.a (mgr.14150) 71 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:34.249 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23484]: debug 2026-03-10T13:43:33.954+0000 7f3ae822a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T13:43:34.249 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:33 vm07 bash[23484]: debug 2026-03-10T13:43:33.990+0000 7f3ae822a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T13:43:34.249 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:34 vm07 bash[23484]: debug 2026-03-10T13:43:34.026+0000 7f3ae822a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T13:43:34.249 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:34 vm07 bash[23484]: debug 2026-03-10T13:43:34.066+0000 7f3ae822a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T13:43:34.249 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:34 vm07 bash[23484]: debug 2026-03-10T13:43:34.118+0000 7f3ae822a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T13:43:34.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:34 vm08 bash[23387]: cluster 2026-03-10T13:43:32.204265+0000 mgr.a (mgr.14150) 71 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:34.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:34 vm08 bash[23387]: cluster 2026-03-10T13:43:32.204265+0000 mgr.a (mgr.14150) 71 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:34.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:34 vm00 bash[20748]: cluster 2026-03-10T13:43:32.204265+0000 mgr.a (mgr.14150) 71 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:34.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:34 vm00 bash[20748]: cluster 2026-03-10T13:43:32.204265+0000 mgr.a (mgr.14150) 71 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:34.822 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:34 vm07 bash[23484]: debug 2026-03-10T13:43:34.534+0000 7f3ae822a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T13:43:34.822 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:34 vm07 bash[23484]: debug 2026-03-10T13:43:34.570+0000 7f3ae822a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T13:43:34.822 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:34 vm07 bash[23484]: debug 2026-03-10T13:43:34.606+0000 7f3ae822a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T13:43:34.822 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:34 vm07 bash[23484]: debug 2026-03-10T13:43:34.742+0000 7f3ae822a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T13:43:34.822 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:34 vm07 bash[23484]: debug 2026-03-10T13:43:34.782+0000 7f3ae822a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T13:43:35.076 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:35 vm07 bash[23044]: cluster 2026-03-10T13:43:34.204632+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:35.076 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:35 vm07 bash[23044]: cluster 2026-03-10T13:43:34.204632+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:35.076 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:34 vm07 bash[23484]: debug 2026-03-10T13:43:34.818+0000 7f3ae822a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T13:43:35.076 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:34 vm07 bash[23484]: debug 2026-03-10T13:43:34.926+0000 7f3ae822a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T13:43:35.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:35 vm08 bash[23387]: cluster 2026-03-10T13:43:34.204632+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:35.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:35 vm08 bash[23387]: cluster 2026-03-10T13:43:34.204632+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:35.456 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:35 vm07 bash[23484]: debug 2026-03-10T13:43:35.070+0000 7f3ae822a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T13:43:35.456 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:35 vm07 bash[23484]: debug 2026-03-10T13:43:35.230+0000 7f3ae822a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T13:43:35.456 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:35 vm07 bash[23484]: debug 2026-03-10T13:43:35.266+0000 7f3ae822a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T13:43:35.456 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:35 vm07 bash[23484]: debug 2026-03-10T13:43:35.306+0000 7f3ae822a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T13:43:35.456 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:35 vm07 bash[23484]: debug 2026-03-10T13:43:35.450+0000 7f3ae822a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T13:43:35.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:35 vm00 bash[20748]: cluster 2026-03-10T13:43:34.204632+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:35.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:35 vm00 bash[20748]: cluster 2026-03-10T13:43:34.204632+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:35.749 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:43:35 vm07 bash[23484]: debug 2026-03-10T13:43:35.666+0000 7f3ae822a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T13:43:36.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:36 vm08 bash[23387]: audit 2026-03-10T13:43:35.673180+0000 mon.b (mon.2) 2 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch
2026-03-10T13:43:36.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:36 vm08 bash[23387]: audit 2026-03-10T13:43:35.673180+0000 mon.b (mon.2) 2 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch
2026-03-10T13:43:36.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:36 vm08 bash[23387]: audit 2026-03-10T13:43:35.673418+0000 mon.b (mon.2) 3 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T13:43:36.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:36 vm08 bash[23387]: audit 2026-03-10T13:43:35.673418+0000 mon.b (mon.2) 3 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T13:43:36.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:36 vm08 bash[23387]: cluster 2026-03-10T13:43:35.673851+0000 mon.a (mon.0) 268 : cluster [DBG] Standby manager daemon b started
2026-03-10T13:43:36.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:36 vm08 bash[23387]: cluster 2026-03-10T13:43:35.673851+0000 mon.a (mon.0) 268 : cluster [DBG] Standby manager daemon b started
2026-03-10T13:43:36.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:36 vm08 bash[23387]: audit 2026-03-10T13:43:35.673946+0000 mon.b (mon.2) 4 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch
2026-03-10T13:43:36.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:36 vm08 bash[23387]: audit 2026-03-10T13:43:35.673946+0000 mon.b (mon.2) 4 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch
2026-03-10T13:43:36.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:36 vm08 bash[23387]: audit 2026-03-10T13:43:35.674159+0000 mon.b (mon.2) 5 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T13:43:36.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:36 vm08 bash[23387]: audit 2026-03-10T13:43:35.674159+0000 mon.b (mon.2) 5 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T13:43:36.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:36 vm00 bash[20748]: audit 2026-03-10T13:43:35.673180+0000 mon.b (mon.2) 2 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch
2026-03-10T13:43:36.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:36 vm00 bash[20748]: audit 2026-03-10T13:43:35.673180+0000 mon.b (mon.2) 2 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch
2026-03-10T13:43:36.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:36 vm00 bash[20748]: audit 2026-03-10T13:43:35.673418+0000 mon.b (mon.2) 3 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T13:43:36.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:36 vm00 bash[20748]: audit 2026-03-10T13:43:35.673418+0000 mon.b (mon.2) 3 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T13:43:36.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:36 vm00 bash[20748]: cluster 2026-03-10T13:43:35.673851+0000 mon.a (mon.0) 268 : cluster [DBG] Standby manager daemon b started
2026-03-10T13:43:36.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:36 vm00 bash[20748]: cluster 2026-03-10T13:43:35.673851+0000 mon.a (mon.0) 268 : cluster [DBG] Standby manager daemon b started
2026-03-10T13:43:36.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:36 vm00 bash[20748]: audit 2026-03-10T13:43:35.673946+0000 mon.b (mon.2) 4 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch
2026-03-10T13:43:36.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:36 vm00 bash[20748]: audit 2026-03-10T13:43:35.673946+0000 mon.b (mon.2) 4 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch
2026-03-10T13:43:36.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:36 vm00 bash[20748]: audit 2026-03-10T13:43:35.674159+0000 mon.b (mon.2) 5 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T13:43:36.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:36 vm00 bash[20748]: audit 2026-03-10T13:43:35.674159+0000 mon.b (mon.2) 5 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T13:43:36.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:36 vm07 bash[23044]: audit 2026-03-10T13:43:35.673180+0000 mon.b (mon.2) 2 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch
2026-03-10T13:43:36.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:36 vm07 bash[23044]: audit 2026-03-10T13:43:35.673180+0000 mon.b (mon.2) 2 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch
2026-03-10T13:43:36.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:36 vm07 bash[23044]: audit 2026-03-10T13:43:35.673418+0000 mon.b (mon.2) 3 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T13:43:36.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:36 vm07 bash[23044]: audit 2026-03-10T13:43:35.673418+0000 mon.b (mon.2) 3 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T13:43:36.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:36 vm07 bash[23044]: cluster 2026-03-10T13:43:35.673851+0000 mon.a (mon.0) 268 : cluster [DBG] Standby manager daemon b started
2026-03-10T13:43:36.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:36 vm07 bash[23044]: cluster 2026-03-10T13:43:35.673851+0000 mon.a (mon.0) 268 : cluster [DBG] Standby manager daemon b started
2026-03-10T13:43:36.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:36 vm07 bash[23044]: audit 2026-03-10T13:43:35.673946+0000 mon.b (mon.2) 4 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch
2026-03-10T13:43:36.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:36 vm07 bash[23044]: audit 2026-03-10T13:43:35.673946+0000 mon.b (mon.2) 4 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch
2026-03-10T13:43:36.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:36 vm07 bash[23044]: audit 2026-03-10T13:43:35.674159+0000 mon.b (mon.2) 5 : audit [DBG] from='mgr.? 192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T13:43:36.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:36 vm07 bash[23044]: audit 2026-03-10T13:43:35.674159+0000 mon.b (mon.2) 5 : audit [DBG] from='mgr.?
192.168.123.107:0/2883605798' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T13:43:37.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:37 vm08 bash[23387]: cluster 2026-03-10T13:43:36.052085+0000 mon.a (mon.0) 269 : cluster [DBG] mgrmap e13: a(active, since 65s), standbys: b 2026-03-10T13:43:37.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:37 vm08 bash[23387]: cluster 2026-03-10T13:43:36.052085+0000 mon.a (mon.0) 269 : cluster [DBG] mgrmap e13: a(active, since 65s), standbys: b 2026-03-10T13:43:37.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:37 vm08 bash[23387]: audit 2026-03-10T13:43:36.052168+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T13:43:37.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:37 vm08 bash[23387]: audit 2026-03-10T13:43:36.052168+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T13:43:37.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:37 vm08 bash[23387]: cluster 2026-03-10T13:43:36.204834+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:37.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:37 vm08 bash[23387]: cluster 2026-03-10T13:43:36.204834+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:37.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:37 vm00 bash[20748]: cluster 2026-03-10T13:43:36.052085+0000 mon.a (mon.0) 269 : cluster [DBG] mgrmap e13: a(active, since 65s), standbys: b 2026-03-10T13:43:37.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:37 vm00 bash[20748]: cluster 2026-03-10T13:43:36.052085+0000 mon.a (mon.0) 269 : cluster [DBG] mgrmap e13: 
a(active, since 65s), standbys: b 2026-03-10T13:43:37.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:37 vm00 bash[20748]: audit 2026-03-10T13:43:36.052168+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T13:43:37.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:37 vm00 bash[20748]: audit 2026-03-10T13:43:36.052168+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T13:43:37.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:37 vm00 bash[20748]: cluster 2026-03-10T13:43:36.204834+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:37.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:37 vm00 bash[20748]: cluster 2026-03-10T13:43:36.204834+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:37 vm07 bash[23044]: cluster 2026-03-10T13:43:36.052085+0000 mon.a (mon.0) 269 : cluster [DBG] mgrmap e13: a(active, since 65s), standbys: b 2026-03-10T13:43:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:37 vm07 bash[23044]: cluster 2026-03-10T13:43:36.052085+0000 mon.a (mon.0) 269 : cluster [DBG] mgrmap e13: a(active, since 65s), standbys: b 2026-03-10T13:43:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:37 vm07 bash[23044]: audit 2026-03-10T13:43:36.052168+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T13:43:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:37 vm07 bash[23044]: audit 2026-03-10T13:43:36.052168+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T13:43:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:37 vm07 bash[23044]: cluster 2026-03-10T13:43:36.204834+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:37 vm07 bash[23044]: cluster 2026-03-10T13:43:36.204834+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:38.293 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:43:38.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.480057+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.480057+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.484258+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.484258+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.484958+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 
vm00 bash[20748]: audit 2026-03-10T13:43:37.484958+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.485550+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.485550+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.488870+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.488870+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: cephadm 2026-03-10T13:43:37.503827+0000 mgr.a (mgr.14150) 74 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: cephadm 2026-03-10T13:43:37.503827+0000 mgr.a (mgr.14150) 74 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 
2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.505235+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.505235+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.505753+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.505753+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.506227+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.506227+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.506585+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.506585+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: cephadm 2026-03-10T13:43:37.507068+0000 mgr.a (mgr.14150) 75 : cephadm [INF] Reconfiguring daemon mgr.a on vm00 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: cephadm 2026-03-10T13:43:37.507068+0000 mgr.a (mgr.14150) 75 : cephadm [INF] Reconfiguring daemon mgr.a on vm00 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.859226+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.859226+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.862913+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.862913+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.863875+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:38.717 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:37.863875+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:38.146631+0000 mon.a (mon.0) 283 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:38.146631+0000 mon.a (mon.0) 283 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:38.147084+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:38.147084+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:38.151313+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:38 vm00 bash[20748]: audit 2026-03-10T13:43:38.151313+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 
2026-03-10T13:43:37.480057+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.480057+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.484258+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.484258+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.484958+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.484958+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.485550+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.485550+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:38.749 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.488870+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.488870+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: cephadm 2026-03-10T13:43:37.503827+0000 mgr.a (mgr.14150) 74 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: cephadm 2026-03-10T13:43:37.503827+0000 mgr.a (mgr.14150) 74 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.505235+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.505235+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.505753+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.505753+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": 
["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.506227+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.506227+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.506585+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.506585+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: cephadm 2026-03-10T13:43:37.507068+0000 mgr.a (mgr.14150) 75 : cephadm [INF] Reconfiguring daemon mgr.a on vm00 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: cephadm 2026-03-10T13:43:37.507068+0000 mgr.a (mgr.14150) 75 : cephadm [INF] Reconfiguring daemon mgr.a on vm00 2026-03-10T13:43:38.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.859226+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.859226+0000 mon.a (mon.0) 280 : audit [INF] 
from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.862913+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.862913+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.863875+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:38.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:37.863875+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:38.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:38.146631+0000 mon.a (mon.0) 283 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:38.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:38.146631+0000 mon.a (mon.0) 283 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:38.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:38.147084+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:38.750 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:38.147084+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:38.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:38.151313+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.750 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:38 vm07 bash[23044]: audit 2026-03-10T13:43:38.151313+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:37.480057+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:37.480057+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:37.484258+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:37.484258+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:37.484958+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 
13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:37.484958+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:37.485550+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:37.485550+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:37.488870+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:37.488870+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: cephadm 2026-03-10T13:43:37.503827+0000 mgr.a (mgr.14150) 74 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: cephadm 2026-03-10T13:43:37.503827+0000 mgr.a (mgr.14150) 74 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 
2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:37.505235+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:37.505753+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:37.506227+0000 mon.a (mon.0) 278 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:37.506585+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: cephadm 2026-03-10T13:43:37.507068+0000 mgr.a (mgr.14150) 75 : cephadm [INF] Reconfiguring daemon mgr.a on vm00
2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:37.859226+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:37.862913+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:37.863875+0000 mon.a (mon.0) 282 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:38.146631+0000 mon.a (mon.0) 283 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:38.147084+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:43:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:38 vm08 bash[23387]: audit 2026-03-10T13:43:38.151313+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:43:39.101 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-10T13:43:39.114 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph orch daemon add osd vm00:/dev/vde
2026-03-10T13:43:39.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:39 vm07 bash[23044]: cluster 2026-03-10T13:43:38.205062+0000 mgr.a (mgr.14150) 76 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:39.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:39 vm08 bash[23387]: cluster 2026-03-10T13:43:38.205062+0000 mgr.a (mgr.14150) 76 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:39.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:39 vm00 bash[20748]: cluster 2026-03-10T13:43:38.205062+0000 mgr.a (mgr.14150) 76 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:41.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:41 vm07 bash[23044]: cluster 2026-03-10T13:43:40.205266+0000 mgr.a (mgr.14150) 77 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:41.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:41 vm08 bash[23387]: cluster 2026-03-10T13:43:40.205266+0000 mgr.a (mgr.14150) 77 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:41.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:41 vm00 bash[20748]: cluster 2026-03-10T13:43:40.205266+0000 mgr.a (mgr.14150) 77 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:43.725 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config
2026-03-10T13:43:43.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:43 vm07 bash[23044]: cluster 2026-03-10T13:43:42.205471+0000 mgr.a (mgr.14150) 78 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:43.771 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:43 vm00 bash[20748]: cluster 2026-03-10T13:43:42.205471+0000 mgr.a (mgr.14150) 78 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:43.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:43 vm08 bash[23387]: cluster 2026-03-10T13:43:42.205471+0000 mgr.a (mgr.14150) 78 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:44.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:44 vm08 bash[23387]: audit 2026-03-10T13:43:43.966516+0000 mgr.a (mgr.14150) 79 : audit [DBG] from='client.24116 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:43:44.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:44 vm08 bash[23387]: audit 2026-03-10T13:43:43.967849+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T13:43:44.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:44 vm08 bash[23387]: audit 2026-03-10T13:43:43.969134+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T13:43:44.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:44 vm08 bash[23387]: audit 2026-03-10T13:43:43.969481+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:43:44.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:44 vm00 bash[20748]: audit 2026-03-10T13:43:43.966516+0000 mgr.a (mgr.14150) 79 : audit [DBG] from='client.24116 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:43:44.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:44 vm00 bash[20748]: audit 2026-03-10T13:43:43.967849+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T13:43:44.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:44 vm00 bash[20748]: audit 2026-03-10T13:43:43.969134+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T13:43:44.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:44 vm00 bash[20748]: audit 2026-03-10T13:43:43.969481+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:43:44.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:44 vm07 bash[23044]: audit 2026-03-10T13:43:43.966516+0000 mgr.a (mgr.14150) 79 : audit [DBG] from='client.24116 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm00:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:43:44.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:44 vm07 bash[23044]: audit 2026-03-10T13:43:43.967849+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T13:43:44.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:44 vm07 bash[23044]: audit 2026-03-10T13:43:43.969134+0000 mon.a (mon.0) 287 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T13:43:44.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:44 vm07 bash[23044]: audit 2026-03-10T13:43:43.969481+0000 mon.a (mon.0) 288 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:43:45.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:45 vm08 bash[23387]: cluster 2026-03-10T13:43:44.205685+0000 mgr.a (mgr.14150) 80 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:45.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:45 vm00 bash[20748]: cluster 2026-03-10T13:43:44.205685+0000 mgr.a (mgr.14150) 80 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:45 vm07 bash[23044]: cluster 2026-03-10T13:43:44.205685+0000 mgr.a (mgr.14150) 80 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:47.798 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:47 vm00 bash[20748]: cluster 2026-03-10T13:43:46.205898+0000 mgr.a (mgr.14150) 81 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:47.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:47 vm08 bash[23387]: cluster 2026-03-10T13:43:46.205898+0000 mgr.a (mgr.14150) 81 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:47.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:47 vm07 bash[23044]: cluster 2026-03-10T13:43:46.205898+0000 mgr.a (mgr.14150) 81 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:48.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:48 vm00 bash[20748]: audit 2026-03-10T13:43:48.350766+0000 mon.a (mon.0) 289 : audit [INF] from='client.? 192.168.123.100:0/3961049789' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d6acd3f9-435e-414f-ba14-3aa55444aaaf"}]: dispatch
2026-03-10T13:43:48.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:48 vm00 bash[20748]: audit 2026-03-10T13:43:48.353933+0000 mon.a (mon.0) 290 : audit [INF] from='client.? 192.168.123.100:0/3961049789' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d6acd3f9-435e-414f-ba14-3aa55444aaaf"}]': finished
2026-03-10T13:43:48.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:48 vm00 bash[20748]: cluster 2026-03-10T13:43:48.356992+0000 mon.a (mon.0) 291 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-10T13:43:48.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:48 vm00 bash[20748]: audit 2026-03-10T13:43:48.357159+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:43:48.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:48 vm08 bash[23387]: audit 2026-03-10T13:43:48.350766+0000 mon.a (mon.0) 289 : audit [INF] from='client.? 192.168.123.100:0/3961049789' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d6acd3f9-435e-414f-ba14-3aa55444aaaf"}]: dispatch
2026-03-10T13:43:48.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:48 vm08 bash[23387]: audit 2026-03-10T13:43:48.353933+0000 mon.a (mon.0) 290 : audit [INF] from='client.? 192.168.123.100:0/3961049789' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d6acd3f9-435e-414f-ba14-3aa55444aaaf"}]': finished
2026-03-10T13:43:48.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:48 vm08 bash[23387]: cluster 2026-03-10T13:43:48.356992+0000 mon.a (mon.0) 291 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-10T13:43:48.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:48 vm08 bash[23387]: audit 2026-03-10T13:43:48.357159+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:43:48.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:48 vm07 bash[23044]: audit 2026-03-10T13:43:48.350766+0000 mon.a (mon.0) 289 : audit [INF] from='client.? 192.168.123.100:0/3961049789' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d6acd3f9-435e-414f-ba14-3aa55444aaaf"}]: dispatch
2026-03-10T13:43:48.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:48 vm07 bash[23044]: audit 2026-03-10T13:43:48.353933+0000 mon.a (mon.0) 290 : audit [INF] from='client.? 192.168.123.100:0/3961049789' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d6acd3f9-435e-414f-ba14-3aa55444aaaf"}]': finished
2026-03-10T13:43:48.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:48 vm07 bash[23044]: cluster 2026-03-10T13:43:48.356992+0000 mon.a (mon.0) 291 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-10T13:43:48.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:48 vm07 bash[23044]: audit 2026-03-10T13:43:48.357159+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:43:49.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:49 vm08 bash[23387]: cluster 2026-03-10T13:43:48.206169+0000 mgr.a (mgr.14150) 82 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:49.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:49 vm08 bash[23387]: audit 2026-03-10T13:43:48.933992+0000 mon.c (mon.1) 3 : audit [DBG] from='client.? 192.168.123.100:0/3549503903' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T13:43:49.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:49 vm00 bash[20748]: cluster 2026-03-10T13:43:48.206169+0000 mgr.a (mgr.14150) 82 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:49.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:49 vm00 bash[20748]: audit 2026-03-10T13:43:48.933992+0000 mon.c (mon.1) 3 : audit [DBG] from='client.? 192.168.123.100:0/3549503903' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T13:43:49.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:49 vm07 bash[23044]: cluster 2026-03-10T13:43:48.206169+0000 mgr.a (mgr.14150) 82 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:49.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:49 vm07 bash[23044]: audit 2026-03-10T13:43:48.933992+0000 mon.c (mon.1) 3 : audit [DBG] from='client.? 192.168.123.100:0/3549503903' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T13:43:51.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:51 vm08 bash[23387]: cluster 2026-03-10T13:43:50.206406+0000 mgr.a (mgr.14150) 83 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:51.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:51 vm00 bash[20748]: cluster 2026-03-10T13:43:50.206406+0000 mgr.a (mgr.14150) 83 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:51.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:51 vm07 bash[23044]: cluster 2026-03-10T13:43:50.206406+0000 mgr.a (mgr.14150) 83 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:53.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:53 vm08 bash[23387]: cluster 2026-03-10T13:43:52.206610+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:53.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:53 vm00 bash[20748]: cluster 2026-03-10T13:43:52.206610+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:53.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:53 vm07 bash[23044]: cluster 2026-03-10T13:43:52.206610+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:55.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:55 vm08 bash[23387]: cluster 2026-03-10T13:43:54.206850+0000 mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:55.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:55 vm00 bash[20748]: cluster 2026-03-10T13:43:54.206850+0000 mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:55.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:55 vm07 bash[23044]: cluster 2026-03-10T13:43:54.206850+0000 mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:57.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:57 vm08 bash[23387]: cluster 2026-03-10T13:43:56.207058+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:57.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:57 vm08 bash[23387]: audit 2026-03-10T13:43:57.314909+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T13:43:57.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:57 vm08 bash[23387]: audit 2026-03-10T13:43:57.315493+0000 mon.a (mon.0) 294 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:43:57.839 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:57 vm00 bash[20748]: cluster 2026-03-10T13:43:56.207058+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:57.839 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:57 vm00 bash[20748]: audit 2026-03-10T13:43:57.314909+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T13:43:57.839 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:57 vm00 bash[20748]: audit 2026-03-10T13:43:57.315493+0000 mon.a (mon.0) 294 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:43:57.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:57 vm07 bash[23044]: cluster 2026-03-10T13:43:56.207058+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:43:57.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:57 vm07 bash[23044]: audit 2026-03-10T13:43:57.314909+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T13:43:57.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:57 vm07 bash[23044]: audit 2026-03-10T13:43:57.315493+0000 mon.a (mon.0) 294 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:43:58.091 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:58 vm00 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:43:58.091 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:43:58 vm00 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'.
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T13:43:58.379 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:43:58 vm00 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T13:43:58.380 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:58 vm00 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T13:43:58.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:58 vm00 bash[20748]: cephadm 2026-03-10T13:43:57.315916+0000 mgr.a (mgr.14150) 87 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-10T13:43:58.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:58 vm00 bash[20748]: cephadm 2026-03-10T13:43:57.315916+0000 mgr.a (mgr.14150) 87 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-10T13:43:58.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:58 vm00 bash[20748]: audit 2026-03-10T13:43:58.291872+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:58.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:58 vm00 bash[20748]: audit 2026-03-10T13:43:58.291872+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:58.720 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:58 vm00 bash[20748]: audit 2026-03-10T13:43:58.297484+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:58.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:58 vm00 bash[20748]: audit 2026-03-10T13:43:58.297484+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:58.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:58 vm00 bash[20748]: audit 2026-03-10T13:43:58.302598+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:58.720 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:58 vm00 bash[20748]: audit 2026-03-10T13:43:58.302598+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:58.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:58 vm08 bash[23387]: cephadm 2026-03-10T13:43:57.315916+0000 mgr.a (mgr.14150) 87 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-10T13:43:58.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:58 vm08 bash[23387]: cephadm 2026-03-10T13:43:57.315916+0000 mgr.a (mgr.14150) 87 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-10T13:43:58.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:58 vm08 bash[23387]: audit 2026-03-10T13:43:58.291872+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:58.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:58 vm08 bash[23387]: audit 2026-03-10T13:43:58.291872+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:58.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:58 vm08 bash[23387]: audit 
2026-03-10T13:43:58.297484+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:58.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:58 vm08 bash[23387]: audit 2026-03-10T13:43:58.297484+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:58.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:58 vm08 bash[23387]: audit 2026-03-10T13:43:58.302598+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:58.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:58 vm08 bash[23387]: audit 2026-03-10T13:43:58.302598+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:58.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:58 vm07 bash[23044]: cephadm 2026-03-10T13:43:57.315916+0000 mgr.a (mgr.14150) 87 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-10T13:43:58.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:58 vm07 bash[23044]: cephadm 2026-03-10T13:43:57.315916+0000 mgr.a (mgr.14150) 87 : cephadm [INF] Deploying daemon osd.0 on vm00 2026-03-10T13:43:58.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:58 vm07 bash[23044]: audit 2026-03-10T13:43:58.291872+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:58.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:58 vm07 bash[23044]: audit 2026-03-10T13:43:58.291872+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:43:58.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:58 vm07 bash[23044]: audit 2026-03-10T13:43:58.297484+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:58.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:58 vm07 bash[23044]: audit 2026-03-10T13:43:58.297484+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:58.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:58 vm07 bash[23044]: audit 2026-03-10T13:43:58.302598+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:58.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:58 vm07 bash[23044]: audit 2026-03-10T13:43:58.302598+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:43:59.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:59 vm00 bash[20748]: cluster 2026-03-10T13:43:58.207408+0000 mgr.a (mgr.14150) 88 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:59.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:43:59 vm00 bash[20748]: cluster 2026-03-10T13:43:58.207408+0000 mgr.a (mgr.14150) 88 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:59.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:59 vm08 bash[23387]: cluster 2026-03-10T13:43:58.207408+0000 mgr.a (mgr.14150) 88 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:59.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:43:59 vm08 bash[23387]: cluster 2026-03-10T13:43:58.207408+0000 mgr.a (mgr.14150) 88 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:59.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:59 vm07 bash[23044]: cluster 2026-03-10T13:43:58.207408+0000 mgr.a (mgr.14150) 88 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:43:59.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:43:59 vm07 bash[23044]: cluster 
2026-03-10T13:43:58.207408+0000 mgr.a (mgr.14150) 88 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:44:01.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:01 vm08 bash[23387]: cluster 2026-03-10T13:44:00.208419+0000 mgr.a (mgr.14150) 89 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:44:01.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:01 vm08 bash[23387]: cluster 2026-03-10T13:44:00.208419+0000 mgr.a (mgr.14150) 89 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:44:01.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:01 vm00 bash[20748]: cluster 2026-03-10T13:44:00.208419+0000 mgr.a (mgr.14150) 89 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:44:01.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:01 vm00 bash[20748]: cluster 2026-03-10T13:44:00.208419+0000 mgr.a (mgr.14150) 89 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:44:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:01 vm07 bash[23044]: cluster 2026-03-10T13:44:00.208419+0000 mgr.a (mgr.14150) 89 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:44:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:01 vm07 bash[23044]: cluster 2026-03-10T13:44:00.208419+0000 mgr.a (mgr.14150) 89 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:44:02.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:02 vm08 bash[23387]: audit 2026-03-10T13:44:01.578298+0000 mon.a (mon.0) 298 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T13:44:02.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:02 vm08 bash[23387]: audit 2026-03-10T13:44:01.578298+0000 mon.a 
(mon.0) 298 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T13:44:02.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:02 vm00 bash[20748]: audit 2026-03-10T13:44:01.578298+0000 mon.a (mon.0) 298 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T13:44:02.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:02 vm00 bash[20748]: audit 2026-03-10T13:44:01.578298+0000 mon.a (mon.0) 298 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T13:44:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:02 vm07 bash[23044]: audit 2026-03-10T13:44:01.578298+0000 mon.a (mon.0) 298 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T13:44:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:02 vm07 bash[23044]: audit 2026-03-10T13:44:01.578298+0000 mon.a (mon.0) 298 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T13:44:03.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:03 vm08 bash[23387]: cluster 2026-03-10T13:44:02.208652+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:44:03.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:03 vm08 bash[23387]: cluster 2026-03-10T13:44:02.208652+0000 mgr.a 
(mgr.14150) 90 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:44:03.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:03 vm08 bash[23387]: audit 2026-03-10T13:44:02.565732+0000 mon.a (mon.0) 299 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T13:44:03.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:03 vm08 bash[23387]: audit 2026-03-10T13:44:02.565732+0000 mon.a (mon.0) 299 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T13:44:03.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:03 vm08 bash[23387]: cluster 2026-03-10T13:44:02.566993+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T13:44:03.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:03 vm08 bash[23387]: cluster 2026-03-10T13:44:02.566993+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T13:44:03.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:03 vm08 bash[23387]: audit 2026-03-10T13:44:02.567148+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T13:44:03.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:03 vm08 bash[23387]: audit 2026-03-10T13:44:02.567148+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T13:44:03.838 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:03 vm08 bash[23387]: audit 2026-03-10T13:44:02.567521+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:44:03.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:03 vm08 bash[23387]: audit 2026-03-10T13:44:02.567521+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:44:03.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:03 vm00 bash[20748]: cluster 2026-03-10T13:44:02.208652+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:44:03.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:03 vm00 bash[20748]: cluster 2026-03-10T13:44:02.208652+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:44:03.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:03 vm00 bash[20748]: audit 2026-03-10T13:44:02.565732+0000 mon.a (mon.0) 299 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T13:44:03.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:03 vm00 bash[20748]: audit 2026-03-10T13:44:02.565732+0000 mon.a (mon.0) 299 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T13:44:03.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:03 vm00 bash[20748]: cluster 2026-03-10T13:44:02.566993+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T13:44:03.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:03 vm00 
bash[20748]: cluster 2026-03-10T13:44:02.566993+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T13:44:03.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:03 vm00 bash[20748]: audit 2026-03-10T13:44:02.567148+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T13:44:03.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:03 vm00 bash[20748]: audit 2026-03-10T13:44:02.567148+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T13:44:03.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:03 vm00 bash[20748]: audit 2026-03-10T13:44:02.567521+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:44:03.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:03 vm00 bash[20748]: audit 2026-03-10T13:44:02.567521+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:44:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:03 vm07 bash[23044]: cluster 2026-03-10T13:44:02.208652+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:44:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:03 vm07 bash[23044]: cluster 2026-03-10T13:44:02.208652+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T13:44:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:03 vm07 
bash[23044]: audit 2026-03-10T13:44:02.565732+0000 mon.a (mon.0) 299 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T13:44:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:03 vm07 bash[23044]: audit 2026-03-10T13:44:02.565732+0000 mon.a (mon.0) 299 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T13:44:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:03 vm07 bash[23044]: cluster 2026-03-10T13:44:02.566993+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T13:44:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:03 vm07 bash[23044]: cluster 2026-03-10T13:44:02.566993+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T13:44:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:03 vm07 bash[23044]: audit 2026-03-10T13:44:02.567148+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T13:44:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:03 vm07 bash[23044]: audit 2026-03-10T13:44:02.567148+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]: dispatch 2026-03-10T13:44:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:03 vm07 bash[23044]: audit 2026-03-10T13:44:02.567521+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:44:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:03 vm07 bash[23044]: audit 2026-03-10T13:44:02.567521+0000 mon.a (mon.0) 302 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:44:04.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: cluster 2026-03-10T13:44:02.624221+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T13:44:04.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: cluster 2026-03-10T13:44:02.624221+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T13:44:04.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: cluster 2026-03-10T13:44:02.624279+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T13:44:04.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: cluster 2026-03-10T13:44:02.624279+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T13:44:04.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: audit 2026-03-10T13:44:03.573288+0000 mon.a (mon.0) 303 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T13:44:04.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: audit 2026-03-10T13:44:03.573288+0000 mon.a (mon.0) 303 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished 2026-03-10T13:44:04.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 
13:44:04 vm08 bash[23387]: cluster 2026-03-10T13:44:03.574885+0000 mon.a (mon.0) 304 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-10T13:44:04.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: cluster 2026-03-10T13:44:03.574885+0000 mon.a (mon.0) 304 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-10T13:44:04.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: audit 2026-03-10T13:44:03.575746+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:44:04.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: audit 2026-03-10T13:44:03.575746+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:44:04.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: audit 2026-03-10T13:44:03.580095+0000 mon.a (mon.0) 306 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:44:04.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: audit 2026-03-10T13:44:03.580095+0000 mon.a (mon.0) 306 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:44:04.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: audit 2026-03-10T13:44:04.375050+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:04.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: audit 2026-03-10T13:44:04.375050+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:04.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 
bash[23387]: audit 2026-03-10T13:44:04.379127+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:04.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: audit 2026-03-10T13:44:04.379127+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:04.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: audit 2026-03-10T13:44:04.379759+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:04.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: audit 2026-03-10T13:44:04.379759+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:04.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: audit 2026-03-10T13:44:04.380227+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:04.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: audit 2026-03-10T13:44:04.380227+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:04.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: audit 2026-03-10T13:44:04.383162+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:04.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:04 vm08 bash[23387]: audit 2026-03-10T13:44:04.383162+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 
2026-03-10T13:44:04.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:04 vm00 bash[20748]: cluster 2026-03-10T13:44:02.624221+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T13:44:04.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:04 vm00 bash[20748]: cluster 2026-03-10T13:44:02.624279+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T13:44:04.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:04 vm00 bash[20748]: audit 2026-03-10T13:44:03.573288+0000 mon.a (mon.0) 303 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T13:44:04.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:04 vm00 bash[20748]: cluster 2026-03-10T13:44:03.574885+0000 mon.a (mon.0) 304 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in
2026-03-10T13:44:04.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:04 vm00 bash[20748]: audit 2026-03-10T13:44:03.575746+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:44:04.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:04 vm00 bash[20748]: audit 2026-03-10T13:44:03.580095+0000 mon.a (mon.0) 306 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:44:04.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:04 vm00 bash[20748]: audit 2026-03-10T13:44:04.375050+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:04.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:04 vm00 bash[20748]: audit 2026-03-10T13:44:04.379127+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:04.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:04 vm00 bash[20748]: audit 2026-03-10T13:44:04.379759+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:44:04.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:04 vm00 bash[20748]: audit 2026-03-10T13:44:04.380227+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:44:04.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:04 vm00 bash[20748]: audit 2026-03-10T13:44:04.383162+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:04.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:04 vm07 bash[23044]: cluster 2026-03-10T13:44:02.624221+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T13:44:04.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:04 vm07 bash[23044]: cluster 2026-03-10T13:44:02.624279+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T13:44:04.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:04 vm07 bash[23044]: audit 2026-03-10T13:44:03.573288+0000 mon.a (mon.0) 303 : audit [INF] from='osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm00", "root=default"]}]': finished
2026-03-10T13:44:04.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:04 vm07 bash[23044]: cluster 2026-03-10T13:44:03.574885+0000 mon.a (mon.0) 304 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in
2026-03-10T13:44:04.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:04 vm07 bash[23044]: audit 2026-03-10T13:44:03.575746+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:44:04.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:04 vm07 bash[23044]: audit 2026-03-10T13:44:03.580095+0000 mon.a (mon.0) 306 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:44:04.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:04 vm07 bash[23044]: audit 2026-03-10T13:44:04.375050+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:04.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:04 vm07 bash[23044]: audit 2026-03-10T13:44:04.379127+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:04.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:04 vm07 bash[23044]: audit 2026-03-10T13:44:04.379759+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:44:04.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:04 vm07 bash[23044]: audit 2026-03-10T13:44:04.380227+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:44:04.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:04 vm07 bash[23044]: audit 2026-03-10T13:44:04.383162+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:05.267 INFO:teuthology.orchestra.run.vm00.stdout:Created osd(s) 0 on host 'vm00'
2026-03-10T13:44:05.341 DEBUG:teuthology.orchestra.run.vm00:osd.0> sudo journalctl -f -n 0 -u ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@osd.0.service
2026-03-10T13:44:05.342 INFO:tasks.cephadm:Deploying osd.1 on vm07 with /dev/vde...
2026-03-10T13:44:05.342 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- lvm zap /dev/vde
2026-03-10T13:44:05.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:05 vm00 bash[20748]: cluster 2026-03-10T13:44:04.208844+0000 mgr.a (mgr.14150) 91 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:44:05.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:05 vm00 bash[20748]: audit 2026-03-10T13:44:04.577704+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:44:05.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:05 vm00 bash[20748]: cluster 2026-03-10T13:44:04.582282+0000 mon.a (mon.0) 313 : cluster [INF] osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835] boot
2026-03-10T13:44:05.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:05 vm00 bash[20748]: cluster 2026-03-10T13:44:04.582322+0000 mon.a (mon.0) 314 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-10T13:44:05.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:05 vm00 bash[20748]: audit 2026-03-10T13:44:04.583372+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:44:05.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:05 vm00 bash[20748]: audit 2026-03-10T13:44:05.255287+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:44:05.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:05 vm00 bash[20748]: audit 2026-03-10T13:44:05.260410+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:05.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:05 vm00 bash[20748]: audit 2026-03-10T13:44:05.264939+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:05.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:05 vm07 bash[23044]: cluster 2026-03-10T13:44:04.208844+0000 mgr.a (mgr.14150) 91 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:44:05.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:05 vm07 bash[23044]: audit 2026-03-10T13:44:04.577704+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:44:05.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:05 vm07 bash[23044]: cluster 2026-03-10T13:44:04.582282+0000 mon.a (mon.0) 313 : cluster [INF] osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835] boot
2026-03-10T13:44:05.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:05 vm07 bash[23044]: cluster 2026-03-10T13:44:04.582322+0000 mon.a (mon.0) 314 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-10T13:44:05.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:05 vm07 bash[23044]: audit 2026-03-10T13:44:04.583372+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:44:05.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:05 vm07 bash[23044]: audit 2026-03-10T13:44:05.255287+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:44:05.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:05 vm07 bash[23044]: audit 2026-03-10T13:44:05.260410+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:05.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:05 vm07 bash[23044]: audit 2026-03-10T13:44:05.264939+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:05.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:05 vm08 bash[23387]: cluster 2026-03-10T13:44:04.208844+0000 mgr.a (mgr.14150) 91 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T13:44:05.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:05 vm08 bash[23387]: audit 2026-03-10T13:44:04.577704+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:44:05.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:05 vm08 bash[23387]: cluster 2026-03-10T13:44:04.582282+0000 mon.a (mon.0) 313 : cluster [INF] osd.0 [v2:192.168.123.100:6802/430820835,v1:192.168.123.100:6803/430820835] boot
2026-03-10T13:44:05.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:05 vm08 bash[23387]: cluster 2026-03-10T13:44:04.582322+0000 mon.a (mon.0) 314 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-10T13:44:05.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:05 vm08 bash[23387]: audit 2026-03-10T13:44:04.583372+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T13:44:05.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:05 vm08 bash[23387]: audit 2026-03-10T13:44:05.255287+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:44:05.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:05 vm08 bash[23387]: audit 2026-03-10T13:44:05.260410+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:05.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:05 vm08 bash[23387]: audit 2026-03-10T13:44:05.264939+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:06.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:06 vm00 bash[20748]: cluster 2026-03-10T13:44:05.594931+0000 mon.a (mon.0) 319 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-10T13:44:06.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:06 vm07 bash[23044]: cluster 2026-03-10T13:44:05.594931+0000 mon.a (mon.0) 319 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-10T13:44:07.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:06 vm08 bash[23387]: cluster 2026-03-10T13:44:05.594931+0000 mon.a (mon.0) 319 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-10T13:44:07.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:07 vm00 bash[20748]: cluster 2026-03-10T13:44:06.209033+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:07.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:07 vm07 bash[23044]: cluster 2026-03-10T13:44:06.209033+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:08.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:07 vm08 bash[23387]: cluster 2026-03-10T13:44:06.209033+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:09.952 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.b/config
2026-03-10T13:44:09.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:09 vm00 bash[20748]: cluster 2026-03-10T13:44:08.209279+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:09.974 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:09 vm07 bash[23044]: cluster 2026-03-10T13:44:08.209279+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:10.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:09 vm08 bash[23387]: cluster 2026-03-10T13:44:08.209279+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:10.940 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T13:44:10.957 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph orch daemon add osd vm07:/dev/vde
2026-03-10T13:44:12.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:11 vm00 bash[20748]: cluster 2026-03-10T13:44:10.209534+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:12.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:11 vm00 bash[20748]: cephadm 2026-03-10T13:44:10.901271+0000 mgr.a (mgr.14150) 95 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T13:44:12.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:11 vm00 bash[20748]: audit 2026-03-10T13:44:10.907612+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:12.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:11 vm00 bash[20748]: audit 2026-03-10T13:44:10.912072+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:12.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:11 vm00 bash[20748]: audit 2026-03-10T13:44:10.912936+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-10T13:44:12.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:11 vm00 bash[20748]: cephadm 2026-03-10T13:44:10.913284+0000 mgr.a (mgr.14150) 96 : cephadm [INF] Adjusting osd_memory_target on vm00 to 455.7M
2026-03-10T13:44:12.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:11 vm00 bash[20748]: cephadm 2026-03-10T13:44:10.913650+0000 mgr.a (mgr.14150) 97 : cephadm [WRN] Unable to set osd_memory_target on vm00 to 477921689: error parsing value: Value '477921689' is below minimum 939524096
2026-03-10T13:44:12.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:11 vm00 bash[20748]: audit 2026-03-10T13:44:10.913966+0000 mon.a (mon.0) 323 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:44:12.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:11 vm00 bash[20748]: audit 2026-03-10T13:44:10.914298+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:44:12.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:11 vm00 bash[20748]: audit 2026-03-10T13:44:10.918998+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:12.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:11 vm07 bash[23044]: cluster 2026-03-10T13:44:10.209534+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:12.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:11 vm07 bash[23044]: cephadm 2026-03-10T13:44:10.901271+0000 mgr.a (mgr.14150) 95 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T13:44:12.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:11 vm07 bash[23044]: audit 2026-03-10T13:44:10.907612+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:12.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:11 vm07 bash[23044]: audit 2026-03-10T13:44:10.912072+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:12.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:11 vm07 bash[23044]: audit 2026-03-10T13:44:10.912936+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-10T13:44:12.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:11 vm07 bash[23044]: cephadm 2026-03-10T13:44:10.913284+0000 mgr.a (mgr.14150) 96 : cephadm [INF] Adjusting osd_memory_target on vm00 to 455.7M
2026-03-10T13:44:12.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:11 vm07 bash[23044]: cephadm 2026-03-10T13:44:10.913650+0000 mgr.a (mgr.14150) 97 : cephadm [WRN] Unable to set osd_memory_target on vm00 to 477921689: error parsing value: Value '477921689' is below minimum 939524096
2026-03-10T13:44:12.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:11 vm07 bash[23044]: audit 2026-03-10T13:44:10.913966+0000 mon.a (mon.0) 323 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:44:12.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:11 vm07 bash[23044]: audit 2026-03-10T13:44:10.914298+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:44:12.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:11 vm07 bash[23044]: audit 2026-03-10T13:44:10.918998+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:12.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:11 vm08 bash[23387]: cluster 2026-03-10T13:44:10.209534+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:12.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:11 vm08 bash[23387]: cephadm 2026-03-10T13:44:10.901271+0000 mgr.a (mgr.14150) 95 : cephadm [INF] Detected new or changed devices on vm00
2026-03-10T13:44:12.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:11 vm08 bash[23387]: audit 2026-03-10T13:44:10.907612+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:12.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:11 vm08 bash[23387]: audit 2026-03-10T13:44:10.912072+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:12.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:11 vm08 bash[23387]: audit 2026-03-10T13:44:10.912936+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
2026-03-10T13:44:12.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:11 vm08 bash[23387]: cephadm 2026-03-10T13:44:10.913284+0000 mgr.a (mgr.14150) 96 : cephadm [INF] Adjusting osd_memory_target on vm00 to 455.7M
osd_memory_target on vm00 to 455.7M 2026-03-10T13:44:12.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:11 vm08 bash[23387]: cephadm 2026-03-10T13:44:10.913650+0000 mgr.a (mgr.14150) 97 : cephadm [WRN] Unable to set osd_memory_target on vm00 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-10T13:44:12.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:11 vm08 bash[23387]: cephadm 2026-03-10T13:44:10.913650+0000 mgr.a (mgr.14150) 97 : cephadm [WRN] Unable to set osd_memory_target on vm00 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-10T13:44:12.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:11 vm08 bash[23387]: audit 2026-03-10T13:44:10.913966+0000 mon.a (mon.0) 323 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:12.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:11 vm08 bash[23387]: audit 2026-03-10T13:44:10.913966+0000 mon.a (mon.0) 323 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:12.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:11 vm08 bash[23387]: audit 2026-03-10T13:44:10.914298+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:12.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:11 vm08 bash[23387]: audit 2026-03-10T13:44:10.914298+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:12.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:11 vm08 bash[23387]: audit 2026-03-10T13:44:10.918998+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' 
entity='mgr.a' 2026-03-10T13:44:12.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:11 vm08 bash[23387]: audit 2026-03-10T13:44:10.918998+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:14.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:13 vm00 bash[20748]: cluster 2026-03-10T13:44:12.209741+0000 mgr.a (mgr.14150) 98 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:14.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:13 vm00 bash[20748]: cluster 2026-03-10T13:44:12.209741+0000 mgr.a (mgr.14150) 98 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:14.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:13 vm07 bash[23044]: cluster 2026-03-10T13:44:12.209741+0000 mgr.a (mgr.14150) 98 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:14.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:13 vm07 bash[23044]: cluster 2026-03-10T13:44:12.209741+0000 mgr.a (mgr.14150) 98 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:14.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:13 vm08 bash[23387]: cluster 2026-03-10T13:44:12.209741+0000 mgr.a (mgr.14150) 98 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:14.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:13 vm08 bash[23387]: cluster 2026-03-10T13:44:12.209741+0000 mgr.a (mgr.14150) 98 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:15.573 INFO:teuthology.orchestra.run.vm07.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.b/config 2026-03-10T13:44:16.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:15 vm00 bash[20748]: cluster 2026-03-10T13:44:14.209975+0000 mgr.a (mgr.14150) 99 : cluster [DBG] 
pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:16.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:15 vm00 bash[20748]: cluster 2026-03-10T13:44:14.209975+0000 mgr.a (mgr.14150) 99 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:16.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:15 vm00 bash[20748]: audit 2026-03-10T13:44:15.894033+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:44:16.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:15 vm00 bash[20748]: audit 2026-03-10T13:44:15.894033+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:44:16.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:15 vm00 bash[20748]: audit 2026-03-10T13:44:15.895238+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:44:16.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:15 vm00 bash[20748]: audit 2026-03-10T13:44:15.895238+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:44:16.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:15 vm00 bash[20748]: audit 2026-03-10T13:44:15.895604+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:16.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:15 vm00 bash[20748]: audit 2026-03-10T13:44:15.895604+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:16.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:15 vm07 bash[23044]: cluster 2026-03-10T13:44:14.209975+0000 mgr.a (mgr.14150) 99 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:16.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:15 vm07 bash[23044]: cluster 2026-03-10T13:44:14.209975+0000 mgr.a (mgr.14150) 99 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:16.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:15 vm07 bash[23044]: audit 2026-03-10T13:44:15.894033+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:44:16.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:15 vm07 bash[23044]: audit 2026-03-10T13:44:15.894033+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:44:16.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:15 vm07 bash[23044]: audit 2026-03-10T13:44:15.895238+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:44:16.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:15 vm07 bash[23044]: audit 2026-03-10T13:44:15.895238+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:44:16.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:15 vm07 bash[23044]: audit 2026-03-10T13:44:15.895604+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:16.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:15 vm07 bash[23044]: audit 2026-03-10T13:44:15.895604+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:15 vm08 bash[23387]: cluster 2026-03-10T13:44:14.209975+0000 mgr.a (mgr.14150) 99 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:15 vm08 bash[23387]: cluster 2026-03-10T13:44:14.209975+0000 mgr.a (mgr.14150) 99 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:15 vm08 bash[23387]: audit 2026-03-10T13:44:15.894033+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:44:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:15 vm08 bash[23387]: audit 2026-03-10T13:44:15.894033+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:44:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:15 vm08 bash[23387]: audit 2026-03-10T13:44:15.895238+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:44:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:15 vm08 bash[23387]: audit 2026-03-10T13:44:15.895238+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:44:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:15 vm08 bash[23387]: audit 2026-03-10T13:44:15.895604+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:15 vm08 bash[23387]: audit 2026-03-10T13:44:15.895604+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:17.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:16 vm00 bash[20748]: audit 2026-03-10T13:44:15.892652+0000 mgr.a (mgr.14150) 100 : audit [DBG] from='client.24128 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:44:17.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:16 vm00 bash[20748]: audit 2026-03-10T13:44:15.892652+0000 mgr.a (mgr.14150) 100 : audit [DBG] from='client.24128 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:44:17.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:16 vm07 bash[23044]: audit 2026-03-10T13:44:15.892652+0000 mgr.a (mgr.14150) 100 : audit [DBG] from='client.24128 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:44:17.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:16 vm07 bash[23044]: audit 2026-03-10T13:44:15.892652+0000 mgr.a (mgr.14150) 100 : audit [DBG] from='client.24128 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:44:17.337 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:16 vm08 bash[23387]: audit 2026-03-10T13:44:15.892652+0000 mgr.a (mgr.14150) 100 : audit [DBG] from='client.24128 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:44:17.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:16 vm08 bash[23387]: audit 2026-03-10T13:44:15.892652+0000 mgr.a (mgr.14150) 100 : audit [DBG] from='client.24128 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:44:18.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:17 vm07 bash[23044]: cluster 2026-03-10T13:44:16.210161+0000 mgr.a (mgr.14150) 101 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:18.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:17 vm07 bash[23044]: cluster 2026-03-10T13:44:16.210161+0000 mgr.a (mgr.14150) 101 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:18.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:17 vm08 bash[23387]: cluster 2026-03-10T13:44:16.210161+0000 mgr.a (mgr.14150) 101 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:18.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:17 vm08 bash[23387]: cluster 2026-03-10T13:44:16.210161+0000 mgr.a (mgr.14150) 101 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:18.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:17 vm00 bash[20748]: cluster 2026-03-10T13:44:16.210161+0000 mgr.a (mgr.14150) 101 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:18.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:17 vm00 bash[20748]: cluster 2026-03-10T13:44:16.210161+0000 mgr.a (mgr.14150) 101 : cluster [DBG] pgmap 
v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:20.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:19 vm07 bash[23044]: cluster 2026-03-10T13:44:18.210399+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:20.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:19 vm07 bash[23044]: cluster 2026-03-10T13:44:18.210399+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:20.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:19 vm08 bash[23387]: cluster 2026-03-10T13:44:18.210399+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:20.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:19 vm08 bash[23387]: cluster 2026-03-10T13:44:18.210399+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:20.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:19 vm00 bash[20748]: cluster 2026-03-10T13:44:18.210399+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:20.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:19 vm00 bash[20748]: cluster 2026-03-10T13:44:18.210399+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:22.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:21 vm07 bash[23044]: cluster 2026-03-10T13:44:20.210659+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:22.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:21 vm07 bash[23044]: cluster 2026-03-10T13:44:20.210659+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 
2026-03-10T13:44:22.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:21 vm07 bash[23044]: audit 2026-03-10T13:44:21.517617+0000 mon.c (mon.1) 4 : audit [INF] from='client.? 192.168.123.107:0/684947495' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "62e51a83-b44b-465f-8f6e-e14cd4837af5"}]: dispatch 2026-03-10T13:44:22.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:21 vm07 bash[23044]: audit 2026-03-10T13:44:21.517617+0000 mon.c (mon.1) 4 : audit [INF] from='client.? 192.168.123.107:0/684947495' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "62e51a83-b44b-465f-8f6e-e14cd4837af5"}]: dispatch 2026-03-10T13:44:22.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:21 vm07 bash[23044]: audit 2026-03-10T13:44:21.517945+0000 mon.a (mon.0) 329 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "62e51a83-b44b-465f-8f6e-e14cd4837af5"}]: dispatch 2026-03-10T13:44:22.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:21 vm07 bash[23044]: audit 2026-03-10T13:44:21.517945+0000 mon.a (mon.0) 329 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "62e51a83-b44b-465f-8f6e-e14cd4837af5"}]: dispatch 2026-03-10T13:44:22.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:21 vm07 bash[23044]: audit 2026-03-10T13:44:21.538539+0000 mon.a (mon.0) 330 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "62e51a83-b44b-465f-8f6e-e14cd4837af5"}]': finished 2026-03-10T13:44:22.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:21 vm07 bash[23044]: audit 2026-03-10T13:44:21.538539+0000 mon.a (mon.0) 330 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "62e51a83-b44b-465f-8f6e-e14cd4837af5"}]': finished 2026-03-10T13:44:22.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:21 vm07 bash[23044]: cluster 2026-03-10T13:44:21.547370+0000 mon.a (mon.0) 331 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T13:44:22.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:21 vm07 bash[23044]: cluster 2026-03-10T13:44:21.547370+0000 mon.a (mon.0) 331 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T13:44:22.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:21 vm07 bash[23044]: audit 2026-03-10T13:44:21.547553+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:22.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:21 vm07 bash[23044]: audit 2026-03-10T13:44:21.547553+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:22.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:21 vm08 bash[23387]: cluster 2026-03-10T13:44:20.210659+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:22.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:21 vm08 bash[23387]: cluster 2026-03-10T13:44:20.210659+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:22.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:21 vm08 bash[23387]: audit 2026-03-10T13:44:21.517617+0000 mon.c (mon.1) 4 : audit [INF] from='client.? 
192.168.123.107:0/684947495' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "62e51a83-b44b-465f-8f6e-e14cd4837af5"}]: dispatch 2026-03-10T13:44:22.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:21 vm08 bash[23387]: audit 2026-03-10T13:44:21.517617+0000 mon.c (mon.1) 4 : audit [INF] from='client.? 192.168.123.107:0/684947495' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "62e51a83-b44b-465f-8f6e-e14cd4837af5"}]: dispatch 2026-03-10T13:44:22.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:21 vm08 bash[23387]: audit 2026-03-10T13:44:21.517945+0000 mon.a (mon.0) 329 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "62e51a83-b44b-465f-8f6e-e14cd4837af5"}]: dispatch 2026-03-10T13:44:22.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:21 vm08 bash[23387]: audit 2026-03-10T13:44:21.517945+0000 mon.a (mon.0) 329 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "62e51a83-b44b-465f-8f6e-e14cd4837af5"}]: dispatch 2026-03-10T13:44:22.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:21 vm08 bash[23387]: audit 2026-03-10T13:44:21.538539+0000 mon.a (mon.0) 330 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "62e51a83-b44b-465f-8f6e-e14cd4837af5"}]': finished 2026-03-10T13:44:22.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:21 vm08 bash[23387]: audit 2026-03-10T13:44:21.538539+0000 mon.a (mon.0) 330 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "62e51a83-b44b-465f-8f6e-e14cd4837af5"}]': finished 2026-03-10T13:44:22.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:21 vm08 bash[23387]: cluster 2026-03-10T13:44:21.547370+0000 mon.a (mon.0) 331 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T13:44:22.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:21 vm08 bash[23387]: cluster 2026-03-10T13:44:21.547370+0000 mon.a (mon.0) 331 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T13:44:22.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:21 vm08 bash[23387]: audit 2026-03-10T13:44:21.547553+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:22.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:21 vm08 bash[23387]: audit 2026-03-10T13:44:21.547553+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:22.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:21 vm00 bash[20748]: cluster 2026-03-10T13:44:20.210659+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:22.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:21 vm00 bash[20748]: cluster 2026-03-10T13:44:20.210659+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:22.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:21 vm00 bash[20748]: audit 2026-03-10T13:44:21.517617+0000 mon.c (mon.1) 4 : audit [INF] from='client.? 
192.168.123.107:0/684947495' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "62e51a83-b44b-465f-8f6e-e14cd4837af5"}]: dispatch 2026-03-10T13:44:22.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:21 vm00 bash[20748]: audit 2026-03-10T13:44:21.517617+0000 mon.c (mon.1) 4 : audit [INF] from='client.? 192.168.123.107:0/684947495' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "62e51a83-b44b-465f-8f6e-e14cd4837af5"}]: dispatch 2026-03-10T13:44:22.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:21 vm00 bash[20748]: audit 2026-03-10T13:44:21.517945+0000 mon.a (mon.0) 329 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "62e51a83-b44b-465f-8f6e-e14cd4837af5"}]: dispatch 2026-03-10T13:44:22.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:21 vm00 bash[20748]: audit 2026-03-10T13:44:21.517945+0000 mon.a (mon.0) 329 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "62e51a83-b44b-465f-8f6e-e14cd4837af5"}]: dispatch 2026-03-10T13:44:22.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:21 vm00 bash[20748]: audit 2026-03-10T13:44:21.538539+0000 mon.a (mon.0) 330 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "62e51a83-b44b-465f-8f6e-e14cd4837af5"}]': finished 2026-03-10T13:44:22.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:21 vm00 bash[20748]: audit 2026-03-10T13:44:21.538539+0000 mon.a (mon.0) 330 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "62e51a83-b44b-465f-8f6e-e14cd4837af5"}]': finished
2026-03-10T13:44:22.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:21 vm00 bash[20748]: cluster 2026-03-10T13:44:21.547370+0000 mon.a (mon.0) 331 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in
2026-03-10T13:44:22.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:21 vm00 bash[20748]: cluster 2026-03-10T13:44:21.547370+0000 mon.a (mon.0) 331 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in
2026-03-10T13:44:22.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:21 vm00 bash[20748]: audit 2026-03-10T13:44:21.547553+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:22.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:21 vm00 bash[20748]: audit 2026-03-10T13:44:21.547553+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:23.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:22 vm07 bash[23044]: audit 2026-03-10T13:44:22.173052+0000 mon.b (mon.2) 6 : audit [DBG] from='client.? 192.168.123.107:0/4174093098' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T13:44:23.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:22 vm07 bash[23044]: audit 2026-03-10T13:44:22.173052+0000 mon.b (mon.2) 6 : audit [DBG] from='client.? 192.168.123.107:0/4174093098' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T13:44:23.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:22 vm08 bash[23387]: audit 2026-03-10T13:44:22.173052+0000 mon.b (mon.2) 6 : audit [DBG] from='client.? 192.168.123.107:0/4174093098' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T13:44:23.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:22 vm08 bash[23387]: audit 2026-03-10T13:44:22.173052+0000 mon.b (mon.2) 6 : audit [DBG] from='client.? 192.168.123.107:0/4174093098' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T13:44:23.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:22 vm00 bash[20748]: audit 2026-03-10T13:44:22.173052+0000 mon.b (mon.2) 6 : audit [DBG] from='client.? 192.168.123.107:0/4174093098' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T13:44:23.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:22 vm00 bash[20748]: audit 2026-03-10T13:44:22.173052+0000 mon.b (mon.2) 6 : audit [DBG] from='client.? 192.168.123.107:0/4174093098' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T13:44:24.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:23 vm07 bash[23044]: cluster 2026-03-10T13:44:22.210884+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:24.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:23 vm07 bash[23044]: cluster 2026-03-10T13:44:22.210884+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:24.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:23 vm08 bash[23387]: cluster 2026-03-10T13:44:22.210884+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:24.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:23 vm08 bash[23387]: cluster 2026-03-10T13:44:22.210884+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:24.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:23 vm00 bash[20748]: cluster 2026-03-10T13:44:22.210884+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:24.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:23 vm00 bash[20748]: cluster 2026-03-10T13:44:22.210884+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:25.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:25 vm07 bash[23044]: cluster 2026-03-10T13:44:24.211132+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:25.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:25 vm07 bash[23044]: cluster 2026-03-10T13:44:24.211132+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:26.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:25 vm08 bash[23387]: cluster 2026-03-10T13:44:24.211132+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:26.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:25 vm08 bash[23387]: cluster 2026-03-10T13:44:24.211132+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:26.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:25 vm00 bash[20748]: cluster 2026-03-10T13:44:24.211132+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:26.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:25 vm00 bash[20748]: cluster 2026-03-10T13:44:24.211132+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:28.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:28 vm08 bash[23387]: cluster 2026-03-10T13:44:26.211388+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:28.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:28 vm08 bash[23387]: cluster 2026-03-10T13:44:26.211388+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:28.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:28 vm00 bash[20748]: cluster 2026-03-10T13:44:26.211388+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:28.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:28 vm00 bash[20748]: cluster 2026-03-10T13:44:26.211388+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:28.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:28 vm07 bash[23044]: cluster 2026-03-10T13:44:26.211388+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:28.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:28 vm07 bash[23044]: cluster 2026-03-10T13:44:26.211388+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:30.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:30 vm08 bash[23387]: cluster 2026-03-10T13:44:28.211627+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:30.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:30 vm08 bash[23387]: cluster 2026-03-10T13:44:28.211627+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:30.424 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:30 vm07 bash[23044]: cluster 2026-03-10T13:44:28.211627+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:30.425 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:30 vm07 bash[23044]: cluster 2026-03-10T13:44:28.211627+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:30.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:30 vm00 bash[20748]: cluster 2026-03-10T13:44:28.211627+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:30.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:30 vm00 bash[20748]: cluster 2026-03-10T13:44:28.211627+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:31.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:31 vm07 bash[23044]: audit 2026-03-10T13:44:30.805559+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T13:44:31.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:31 vm07 bash[23044]: audit 2026-03-10T13:44:30.805559+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T13:44:31.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:31 vm07 bash[23044]: audit 2026-03-10T13:44:30.806096+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:44:31.021 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:31 vm07 bash[23044]: audit 2026-03-10T13:44:30.806096+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:44:31.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:31 vm08 bash[23387]: audit 2026-03-10T13:44:30.805559+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T13:44:31.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:31 vm08 bash[23387]: audit 2026-03-10T13:44:30.805559+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T13:44:31.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:31 vm08 bash[23387]: audit 2026-03-10T13:44:30.806096+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:44:31.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:31 vm08 bash[23387]: audit 2026-03-10T13:44:30.806096+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:44:31.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:31 vm00 bash[20748]: audit 2026-03-10T13:44:30.805559+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T13:44:31.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:31 vm00 bash[20748]: audit 2026-03-10T13:44:30.805559+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T13:44:31.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:31 vm00 bash[20748]: audit 2026-03-10T13:44:30.806096+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:44:31.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:31 vm00 bash[20748]: audit 2026-03-10T13:44:30.806096+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:44:31.585 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:31 vm07 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:44:31.585 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:44:31 vm07 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:44:31.879 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:31 vm07 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:44:31.879 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:44:31 vm07 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:44:32.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:32 vm07 bash[23044]: cluster 2026-03-10T13:44:30.211804+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:32.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:32 vm07 bash[23044]: cluster 2026-03-10T13:44:30.211804+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:32.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:32 vm07 bash[23044]: cephadm 2026-03-10T13:44:30.806516+0000 mgr.a (mgr.14150) 109 : cephadm [INF] Deploying daemon osd.1 on vm07
2026-03-10T13:44:32.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:32 vm07 bash[23044]: cephadm 2026-03-10T13:44:30.806516+0000 mgr.a (mgr.14150) 109 : cephadm [INF] Deploying daemon osd.1 on vm07
2026-03-10T13:44:32.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:32 vm07 bash[23044]: audit 2026-03-10T13:44:31.818977+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:44:32.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:32 vm07 bash[23044]: audit 2026-03-10T13:44:31.818977+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:44:32.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:32 vm07 bash[23044]: audit 2026-03-10T13:44:31.823566+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:32.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:32 vm07 bash[23044]: audit 2026-03-10T13:44:31.823566+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:32.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:32 vm07 bash[23044]: audit 2026-03-10T13:44:31.827448+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:32.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:32 vm07 bash[23044]: audit 2026-03-10T13:44:31.827448+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:32.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:32 vm08 bash[23387]: cluster 2026-03-10T13:44:30.211804+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:32.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:32 vm08 bash[23387]: cluster 2026-03-10T13:44:30.211804+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:32.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:32 vm08 bash[23387]: cephadm 2026-03-10T13:44:30.806516+0000 mgr.a (mgr.14150) 109 : cephadm [INF] Deploying daemon osd.1 on vm07
2026-03-10T13:44:32.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:32 vm08 bash[23387]: cephadm 2026-03-10T13:44:30.806516+0000 mgr.a (mgr.14150) 109 : cephadm [INF] Deploying daemon osd.1 on vm07
2026-03-10T13:44:32.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:32 vm08 bash[23387]: audit 2026-03-10T13:44:31.818977+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:44:32.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:32 vm08 bash[23387]: audit 2026-03-10T13:44:31.818977+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:44:32.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:32 vm08 bash[23387]: audit 2026-03-10T13:44:31.823566+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:32.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:32 vm08 bash[23387]: audit 2026-03-10T13:44:31.823566+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:32.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:32 vm08 bash[23387]: audit 2026-03-10T13:44:31.827448+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:32.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:32 vm08 bash[23387]: audit 2026-03-10T13:44:31.827448+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:32.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:32 vm00 bash[20748]: cluster 2026-03-10T13:44:30.211804+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:32 vm00 bash[20748]: cluster 2026-03-10T13:44:30.211804+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:32 vm00 bash[20748]: cephadm 2026-03-10T13:44:30.806516+0000 mgr.a (mgr.14150) 109 : cephadm [INF] Deploying daemon osd.1 on vm07
2026-03-10T13:44:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:32 vm00 bash[20748]: cephadm 2026-03-10T13:44:30.806516+0000 mgr.a (mgr.14150) 109 : cephadm [INF] Deploying daemon osd.1 on vm07
2026-03-10T13:44:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:32 vm00 bash[20748]: audit 2026-03-10T13:44:31.818977+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:44:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:32 vm00 bash[20748]: audit 2026-03-10T13:44:31.818977+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:44:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:32 vm00 bash[20748]: audit 2026-03-10T13:44:31.823566+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:32 vm00 bash[20748]: audit 2026-03-10T13:44:31.823566+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:32 vm00 bash[20748]: audit 2026-03-10T13:44:31.827448+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:32.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:32 vm00 bash[20748]: audit 2026-03-10T13:44:31.827448+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:44:34.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:34 vm08 bash[23387]: cluster 2026-03-10T13:44:32.212011+0000 mgr.a (mgr.14150) 110 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:34.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:34 vm08 bash[23387]: cluster 2026-03-10T13:44:32.212011+0000 mgr.a (mgr.14150) 110 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:34.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:34 vm00 bash[20748]: cluster 2026-03-10T13:44:32.212011+0000 mgr.a (mgr.14150) 110 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:34.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:34 vm00 bash[20748]: cluster 2026-03-10T13:44:32.212011+0000 mgr.a (mgr.14150) 110 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:34.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:34 vm07 bash[23044]: cluster 2026-03-10T13:44:32.212011+0000 mgr.a (mgr.14150) 110 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:34.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:34 vm07 bash[23044]: cluster 2026-03-10T13:44:32.212011+0000 mgr.a (mgr.14150) 110 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:36.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:36 vm08 bash[23387]: cluster 2026-03-10T13:44:34.212249+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:36.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:36 vm08 bash[23387]: cluster 2026-03-10T13:44:34.212249+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:36.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:36 vm08 bash[23387]: audit 2026-03-10T13:44:35.094633+0000 mon.a (mon.0) 338 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T13:44:36.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:36 vm08 bash[23387]: audit 2026-03-10T13:44:35.094633+0000 mon.a (mon.0) 338 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T13:44:36.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:36 vm00 bash[20748]: cluster 2026-03-10T13:44:34.212249+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:36.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:36 vm00 bash[20748]: cluster 2026-03-10T13:44:34.212249+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:36.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:36 vm00 bash[20748]: audit 2026-03-10T13:44:35.094633+0000 mon.a (mon.0) 338 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T13:44:36.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:36 vm00 bash[20748]: audit 2026-03-10T13:44:35.094633+0000 mon.a (mon.0) 338 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T13:44:36.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:36 vm07 bash[23044]: cluster 2026-03-10T13:44:34.212249+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:36.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:36 vm07 bash[23044]: cluster 2026-03-10T13:44:34.212249+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:36.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:36 vm07 bash[23044]: audit 2026-03-10T13:44:35.094633+0000 mon.a (mon.0) 338 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T13:44:36.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:36 vm07 bash[23044]: audit 2026-03-10T13:44:35.094633+0000 mon.a (mon.0) 338 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T13:44:37.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:37 vm08 bash[23387]: audit 2026-03-10T13:44:36.052475+0000 mon.a (mon.0) 339 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T13:44:37.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:37 vm08 bash[23387]: audit 2026-03-10T13:44:36.052475+0000 mon.a (mon.0) 339 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T13:44:37.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:37 vm08 bash[23387]: cluster 2026-03-10T13:44:36.057540+0000 mon.a (mon.0) 340 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-10T13:44:37.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:37 vm08 bash[23387]: cluster 2026-03-10T13:44:36.057540+0000 mon.a (mon.0) 340 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-10T13:44:37.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:37 vm08 bash[23387]: audit 2026-03-10T13:44:36.068123+0000 mon.a (mon.0) 341 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T13:44:37.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:37 vm08 bash[23387]: audit 2026-03-10T13:44:36.068123+0000 mon.a (mon.0) 341 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T13:44:37.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:37 vm08 bash[23387]: audit 2026-03-10T13:44:36.068250+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:37.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:37 vm08 bash[23387]: audit 2026-03-10T13:44:36.068250+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:37.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:37 vm08 bash[23387]: cluster 2026-03-10T13:44:36.212480+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v64: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:37.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:37 vm08 bash[23387]: cluster 2026-03-10T13:44:36.212480+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v64: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:37.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:37 vm08 bash[23387]: audit 2026-03-10T13:44:37.055726+0000 mon.a (mon.0) 343 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T13:44:37.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:37 vm08 bash[23387]: audit 2026-03-10T13:44:37.055726+0000 mon.a (mon.0) 343 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T13:44:37.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:37 vm08 bash[23387]: cluster 2026-03-10T13:44:37.058969+0000 mon.a (mon.0) 344 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-10T13:44:37.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:37 vm08 bash[23387]: cluster 2026-03-10T13:44:37.058969+0000 mon.a (mon.0) 344 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-10T13:44:37.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:37 vm08 bash[23387]: audit 2026-03-10T13:44:37.059092+0000 mon.a (mon.0) 345 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:37.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:37 vm08 bash[23387]: audit 2026-03-10T13:44:37.059092+0000 mon.a (mon.0) 345 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:37.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:37 vm08 bash[23387]: audit 2026-03-10T13:44:37.062534+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:37.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:37 vm08 bash[23387]: audit 2026-03-10T13:44:37.062534+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:37.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:37 vm00 bash[20748]: audit 2026-03-10T13:44:36.052475+0000 mon.a (mon.0) 339 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T13:44:37.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:37 vm00 bash[20748]: audit 2026-03-10T13:44:36.052475+0000 mon.a (mon.0) 339 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T13:44:37.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:37 vm00 bash[20748]: cluster 2026-03-10T13:44:36.057540+0000 mon.a (mon.0) 340 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-10T13:44:37.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:37 vm00 bash[20748]: cluster 2026-03-10T13:44:36.057540+0000 mon.a (mon.0) 340 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-10T13:44:37.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:37 vm00 bash[20748]: audit 2026-03-10T13:44:36.068123+0000 mon.a (mon.0) 341 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T13:44:37.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:37 vm00 bash[20748]: audit 2026-03-10T13:44:36.068123+0000 mon.a (mon.0) 341 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T13:44:37.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:37 vm00 bash[20748]: audit 2026-03-10T13:44:36.068250+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:37.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:37 vm00 bash[20748]: audit 2026-03-10T13:44:36.068250+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:37.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:37 vm00 bash[20748]: cluster 2026-03-10T13:44:36.212480+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v64: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:37.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:37 vm00 bash[20748]: cluster 2026-03-10T13:44:36.212480+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v64: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:37.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:37 vm00 bash[20748]: audit 2026-03-10T13:44:37.055726+0000 mon.a (mon.0) 343 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T13:44:37.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:37 vm00 bash[20748]: audit 2026-03-10T13:44:37.055726+0000 mon.a (mon.0) 343 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T13:44:37.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:37 vm00 bash[20748]: cluster 2026-03-10T13:44:37.058969+0000 mon.a (mon.0) 344 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-10T13:44:37.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:37 vm00 bash[20748]: cluster 2026-03-10T13:44:37.058969+0000 mon.a (mon.0) 344 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-10T13:44:37.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:37 vm00 bash[20748]: audit 2026-03-10T13:44:37.059092+0000 mon.a (mon.0) 345 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:37.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:37 vm00 bash[20748]: audit 2026-03-10T13:44:37.059092+0000 mon.a (mon.0) 345 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:37.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:37 vm00 bash[20748]: audit 2026-03-10T13:44:37.062534+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:37.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:37 vm00 bash[20748]: audit 2026-03-10T13:44:37.062534+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:37 vm07 bash[23044]: audit 2026-03-10T13:44:36.052475+0000 mon.a (mon.0) 339 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T13:44:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:37 vm07 bash[23044]: audit 2026-03-10T13:44:36.052475+0000 mon.a (mon.0) 339 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T13:44:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:37 vm07 bash[23044]: cluster 2026-03-10T13:44:36.057540+0000 mon.a (mon.0) 340 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-10T13:44:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:37 vm07 bash[23044]: cluster 2026-03-10T13:44:36.057540+0000 mon.a (mon.0) 340 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-10T13:44:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:37 vm07 bash[23044]: audit 2026-03-10T13:44:36.068123+0000 mon.a (mon.0) 341 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T13:44:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:37 vm07 bash[23044]: audit 2026-03-10T13:44:36.068123+0000 mon.a (mon.0) 341 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T13:44:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:37 vm07 bash[23044]: audit 2026-03-10T13:44:36.068250+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:37 vm07 bash[23044]: audit 2026-03-10T13:44:36.068250+0000 mon.a (mon.0) 342 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:37 vm07 bash[23044]: cluster 2026-03-10T13:44:36.212480+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v64: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:37 vm07 bash[23044]: cluster 2026-03-10T13:44:36.212480+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v64: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
2026-03-10T13:44:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:37 vm07 bash[23044]: audit 2026-03-10T13:44:37.055726+0000 mon.a (mon.0) 343 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T13:44:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:37 vm07 bash[23044]: audit 2026-03-10T13:44:37.055726+0000 mon.a (mon.0) 343 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T13:44:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:37 vm07 bash[23044]: cluster 2026-03-10T13:44:37.058969+0000 mon.a (mon.0) 344 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-10T13:44:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:37 vm07 bash[23044]: cluster 2026-03-10T13:44:37.058969+0000 mon.a (mon.0) 344 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-10T13:44:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:37 vm07 bash[23044]: audit 2026-03-10T13:44:37.059092+0000 mon.a (mon.0) 345 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:37.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:37 vm07 bash[23044]: audit 2026-03-10T13:44:37.059092+0000 mon.a (mon.0) 345 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:37.500 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:37 vm07 bash[23044]: audit 2026-03-10T13:44:37.062534+0000 mon.a (mon.0) 346 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T13:44:37.500 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:37 vm07 bash[23044]: audit 2026-03-10T13:44:37.062534+0000 mon.a (mon.0) 346 :
audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:39.148 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 1 on host 'vm07' 2026-03-10T13:44:39.213 DEBUG:teuthology.orchestra.run.vm07:osd.1> sudo journalctl -f -n 0 -u ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@osd.1.service 2026-03-10T13:44:39.214 INFO:tasks.cephadm:Deploying osd.2 on vm08 with /dev/vde... 2026-03-10T13:44:39.214 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- lvm zap /dev/vde 2026-03-10T13:44:39.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:38 vm00 bash[20748]: cluster 2026-03-10T13:44:36.117029+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T13:44:39.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:38 vm00 bash[20748]: cluster 2026-03-10T13:44:36.117029+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T13:44:39.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:38 vm00 bash[20748]: cluster 2026-03-10T13:44:36.117099+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T13:44:39.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:38 vm00 bash[20748]: cluster 2026-03-10T13:44:36.117099+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T13:44:39.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:38 vm00 bash[20748]: audit 2026-03-10T13:44:37.961259+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:39.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:38 vm00 bash[20748]: audit 2026-03-10T13:44:37.961259+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 
2026-03-10T13:44:39.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:38 vm00 bash[20748]: audit 2026-03-10T13:44:37.989502+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:39.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:38 vm00 bash[20748]: audit 2026-03-10T13:44:37.989502+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:38 vm00 bash[20748]: audit 2026-03-10T13:44:37.990258+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:38 vm00 bash[20748]: audit 2026-03-10T13:44:37.990258+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:38 vm00 bash[20748]: audit 2026-03-10T13:44:37.990690+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:38 vm00 bash[20748]: audit 2026-03-10T13:44:37.990690+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:38 vm00 bash[20748]: audit 2026-03-10T13:44:38.013848+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:38 vm00 bash[20748]: audit 2026-03-10T13:44:38.013848+0000 mon.a (mon.0) 351 : 
audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:38 vm00 bash[20748]: audit 2026-03-10T13:44:38.062205+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:38 vm00 bash[20748]: audit 2026-03-10T13:44:38.062205+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:38 vm00 bash[20748]: audit 2026-03-10T13:44:38.717366+0000 mon.a (mon.0) 353 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' 2026-03-10T13:44:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:38 vm00 bash[20748]: audit 2026-03-10T13:44:38.717366+0000 mon.a (mon.0) 353 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' 2026-03-10T13:44:39.219 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:38 vm07 bash[23044]: cluster 2026-03-10T13:44:36.117029+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T13:44:39.219 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:38 vm07 bash[23044]: cluster 2026-03-10T13:44:36.117029+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T13:44:39.219 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:38 vm07 bash[23044]: cluster 2026-03-10T13:44:36.117099+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T13:44:39.219 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:38 vm07 bash[23044]: cluster 2026-03-10T13:44:36.117099+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T13:44:39.219 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:38 vm07 bash[23044]: audit 2026-03-10T13:44:37.961259+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:39.219 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:38 vm07 bash[23044]: audit 2026-03-10T13:44:37.961259+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:39.219 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:38 vm07 bash[23044]: audit 2026-03-10T13:44:37.989502+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:39.219 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:38 vm07 bash[23044]: audit 2026-03-10T13:44:37.989502+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:39.219 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:38 vm07 bash[23044]: audit 2026-03-10T13:44:37.990258+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:39.219 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:38 vm07 bash[23044]: audit 2026-03-10T13:44:37.990258+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:39.219 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:38 vm07 bash[23044]: audit 2026-03-10T13:44:37.990690+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:39.219 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:38 vm07 bash[23044]: audit 2026-03-10T13:44:37.990690+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": 
"auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:39.219 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:38 vm07 bash[23044]: audit 2026-03-10T13:44:38.013848+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:39.219 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:38 vm07 bash[23044]: audit 2026-03-10T13:44:38.013848+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:39.219 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:38 vm07 bash[23044]: audit 2026-03-10T13:44:38.062205+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:39.219 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:38 vm07 bash[23044]: audit 2026-03-10T13:44:38.062205+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:39.219 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:38 vm07 bash[23044]: audit 2026-03-10T13:44:38.717366+0000 mon.a (mon.0) 353 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' 2026-03-10T13:44:39.219 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:38 vm07 bash[23044]: audit 2026-03-10T13:44:38.717366+0000 mon.a (mon.0) 353 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' 2026-03-10T13:44:39.221 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:38 vm08 bash[23387]: cluster 2026-03-10T13:44:36.117029+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T13:44:39.221 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:38 vm08 bash[23387]: cluster 2026-03-10T13:44:36.117029+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 
2026-03-10T13:44:39.221 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:38 vm08 bash[23387]: cluster 2026-03-10T13:44:36.117099+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T13:44:39.221 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:38 vm08 bash[23387]: cluster 2026-03-10T13:44:36.117099+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T13:44:39.221 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:38 vm08 bash[23387]: audit 2026-03-10T13:44:37.961259+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:39.221 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:38 vm08 bash[23387]: audit 2026-03-10T13:44:37.961259+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:39.221 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:38 vm08 bash[23387]: audit 2026-03-10T13:44:37.989502+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:39.221 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:38 vm08 bash[23387]: audit 2026-03-10T13:44:37.989502+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:39.221 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:38 vm08 bash[23387]: audit 2026-03-10T13:44:37.990258+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:39.221 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:38 vm08 bash[23387]: audit 2026-03-10T13:44:37.990258+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:39.221 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:38 vm08 bash[23387]: audit 
2026-03-10T13:44:37.990690+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:39.221 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:38 vm08 bash[23387]: audit 2026-03-10T13:44:37.990690+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:39.221 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:38 vm08 bash[23387]: audit 2026-03-10T13:44:38.013848+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:39.221 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:38 vm08 bash[23387]: audit 2026-03-10T13:44:38.013848+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:39.221 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:38 vm08 bash[23387]: audit 2026-03-10T13:44:38.062205+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:39.221 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:38 vm08 bash[23387]: audit 2026-03-10T13:44:38.062205+0000 mon.a (mon.0) 352 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:39.221 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:38 vm08 bash[23387]: audit 2026-03-10T13:44:38.717366+0000 mon.a (mon.0) 353 : audit [INF] from='osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' 2026-03-10T13:44:39.221 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:38 vm08 bash[23387]: audit 2026-03-10T13:44:38.717366+0000 mon.a (mon.0) 353 : audit [INF] from='osd.1 
[v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062]' entity='osd.1' 2026-03-10T13:44:40.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:40 vm08 bash[23387]: cluster 2026-03-10T13:44:38.212716+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:40.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:40 vm08 bash[23387]: cluster 2026-03-10T13:44:38.212716+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:40.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:40 vm08 bash[23387]: audit 2026-03-10T13:44:39.062360+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:40.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:40 vm08 bash[23387]: audit 2026-03-10T13:44:39.062360+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:40.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:40 vm08 bash[23387]: audit 2026-03-10T13:44:39.136224+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:44:40.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:40 vm08 bash[23387]: audit 2026-03-10T13:44:39.136224+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:44:40.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:40 vm08 bash[23387]: audit 2026-03-10T13:44:39.140733+0000 mon.a (mon.0) 356 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:40.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 
10 13:44:40 vm08 bash[23387]: audit 2026-03-10T13:44:39.140733+0000 mon.a (mon.0) 356 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:40.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:40 vm08 bash[23387]: audit 2026-03-10T13:44:39.144968+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:40.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:40 vm08 bash[23387]: audit 2026-03-10T13:44:39.144968+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:40.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:40 vm08 bash[23387]: cluster 2026-03-10T13:44:39.722316+0000 mon.a (mon.0) 358 : cluster [INF] osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062] boot 2026-03-10T13:44:40.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:40 vm08 bash[23387]: cluster 2026-03-10T13:44:39.722316+0000 mon.a (mon.0) 358 : cluster [INF] osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062] boot 2026-03-10T13:44:40.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:40 vm08 bash[23387]: cluster 2026-03-10T13:44:39.722394+0000 mon.a (mon.0) 359 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-10T13:44:40.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:40 vm08 bash[23387]: cluster 2026-03-10T13:44:39.722394+0000 mon.a (mon.0) 359 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-10T13:44:40.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:40 vm08 bash[23387]: audit 2026-03-10T13:44:39.722524+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:40.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:40 vm08 bash[23387]: audit 2026-03-10T13:44:39.722524+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:40.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:40 vm00 bash[20748]: cluster 2026-03-10T13:44:38.212716+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:40.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:40 vm00 bash[20748]: cluster 2026-03-10T13:44:38.212716+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:40.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:40 vm00 bash[20748]: audit 2026-03-10T13:44:39.062360+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:40.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:40 vm00 bash[20748]: audit 2026-03-10T13:44:39.062360+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:40.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:40 vm00 bash[20748]: audit 2026-03-10T13:44:39.136224+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:44:40.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:40 vm00 bash[20748]: audit 2026-03-10T13:44:39.136224+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:44:40.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:40 vm00 bash[20748]: audit 2026-03-10T13:44:39.140733+0000 mon.a (mon.0) 356 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:40.467 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:40 vm00 bash[20748]: audit 2026-03-10T13:44:39.140733+0000 mon.a (mon.0) 356 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:40.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:40 vm00 bash[20748]: audit 2026-03-10T13:44:39.144968+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:40.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:40 vm00 bash[20748]: audit 2026-03-10T13:44:39.144968+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:40.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:40 vm00 bash[20748]: cluster 2026-03-10T13:44:39.722316+0000 mon.a (mon.0) 358 : cluster [INF] osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062] boot 2026-03-10T13:44:40.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:40 vm00 bash[20748]: cluster 2026-03-10T13:44:39.722316+0000 mon.a (mon.0) 358 : cluster [INF] osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062] boot 2026-03-10T13:44:40.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:40 vm00 bash[20748]: cluster 2026-03-10T13:44:39.722394+0000 mon.a (mon.0) 359 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-10T13:44:40.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:40 vm00 bash[20748]: cluster 2026-03-10T13:44:39.722394+0000 mon.a (mon.0) 359 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-10T13:44:40.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:40 vm00 bash[20748]: audit 2026-03-10T13:44:39.722524+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:40.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:40 vm00 bash[20748]: audit 2026-03-10T13:44:39.722524+0000 mon.a 
(mon.0) 360 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:40.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:40 vm07 bash[23044]: cluster 2026-03-10T13:44:38.212716+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:40.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:40 vm07 bash[23044]: cluster 2026-03-10T13:44:38.212716+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-10T13:44:40.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:40 vm07 bash[23044]: audit 2026-03-10T13:44:39.062360+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:40.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:40 vm07 bash[23044]: audit 2026-03-10T13:44:39.062360+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:40.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:40 vm07 bash[23044]: audit 2026-03-10T13:44:39.136224+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:44:40.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:40 vm07 bash[23044]: audit 2026-03-10T13:44:39.136224+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:44:40.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:40 vm07 bash[23044]: audit 2026-03-10T13:44:39.140733+0000 mon.a (mon.0) 356 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 
2026-03-10T13:44:40.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:40 vm07 bash[23044]: audit 2026-03-10T13:44:39.140733+0000 mon.a (mon.0) 356 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:40.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:40 vm07 bash[23044]: audit 2026-03-10T13:44:39.144968+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:40.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:40 vm07 bash[23044]: audit 2026-03-10T13:44:39.144968+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:40.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:40 vm07 bash[23044]: cluster 2026-03-10T13:44:39.722316+0000 mon.a (mon.0) 358 : cluster [INF] osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062] boot 2026-03-10T13:44:40.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:40 vm07 bash[23044]: cluster 2026-03-10T13:44:39.722316+0000 mon.a (mon.0) 358 : cluster [INF] osd.1 [v2:192.168.123.107:6800/2145894062,v1:192.168.123.107:6801/2145894062] boot 2026-03-10T13:44:40.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:40 vm07 bash[23044]: cluster 2026-03-10T13:44:39.722394+0000 mon.a (mon.0) 359 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-10T13:44:40.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:40 vm07 bash[23044]: cluster 2026-03-10T13:44:39.722394+0000 mon.a (mon.0) 359 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-10T13:44:40.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:40 vm07 bash[23044]: audit 2026-03-10T13:44:39.722524+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:40.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:40 vm07 bash[23044]: audit 
2026-03-10T13:44:39.722524+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:44:42.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:42 vm08 bash[23387]: cluster 2026-03-10T13:44:40.212948+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:42.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:42 vm08 bash[23387]: cluster 2026-03-10T13:44:40.212948+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:42.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:42 vm08 bash[23387]: cluster 2026-03-10T13:44:41.027403+0000 mon.a (mon.0) 361 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-10T13:44:42.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:42 vm08 bash[23387]: cluster 2026-03-10T13:44:41.027403+0000 mon.a (mon.0) 361 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-10T13:44:42.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:42 vm00 bash[20748]: cluster 2026-03-10T13:44:40.212948+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:42.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:42 vm00 bash[20748]: cluster 2026-03-10T13:44:40.212948+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:42.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:42 vm00 bash[20748]: cluster 2026-03-10T13:44:41.027403+0000 mon.a (mon.0) 361 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-10T13:44:42.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:42 vm00 bash[20748]: cluster 2026-03-10T13:44:41.027403+0000 mon.a (mon.0) 361 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-10T13:44:42.499 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:42 vm07 bash[23044]: cluster 2026-03-10T13:44:40.212948+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:42.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:42 vm07 bash[23044]: cluster 2026-03-10T13:44:40.212948+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:42.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:42 vm07 bash[23044]: cluster 2026-03-10T13:44:41.027403+0000 mon.a (mon.0) 361 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-10T13:44:42.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:42 vm07 bash[23044]: cluster 2026-03-10T13:44:41.027403+0000 mon.a (mon.0) 361 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-10T13:44:42.822 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.c/config 2026-03-10T13:44:43.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:43 vm08 bash[23387]: cluster 2026-03-10T13:44:42.213169+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:43.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:43 vm08 bash[23387]: cluster 2026-03-10T13:44:42.213169+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:43.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:43 vm00 bash[20748]: cluster 2026-03-10T13:44:42.213169+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:43.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:43 vm00 bash[20748]: cluster 2026-03-10T13:44:42.213169+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:43.499 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:43 vm07 bash[23044]: cluster 2026-03-10T13:44:42.213169+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:43.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:43 vm07 bash[23044]: cluster 2026-03-10T13:44:42.213169+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:43.617 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-10T13:44:43.631 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph orch daemon add osd vm08:/dev/vde 2026-03-10T13:44:45.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: cluster 2026-03-10T13:44:44.213399+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:45.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: cluster 2026-03-10T13:44:44.213399+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:45.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: cephadm 2026-03-10T13:44:44.663531+0000 mgr.a (mgr.14150) 117 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T13:44:45.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: cephadm 2026-03-10T13:44:44.663531+0000 mgr.a (mgr.14150) 117 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T13:44:45.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: audit 2026-03-10T13:44:44.668348+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 
2026-03-10T13:44:45.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: audit 2026-03-10T13:44:44.668348+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:45.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: audit 2026-03-10T13:44:44.672191+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:45.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: audit 2026-03-10T13:44:44.672191+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:45.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: audit 2026-03-10T13:44:44.672781+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:44:45.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: audit 2026-03-10T13:44:44.672781+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:44:45.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: cephadm 2026-03-10T13:44:44.673108+0000 mgr.a (mgr.14150) 118 : cephadm [INF] Adjusting osd_memory_target on vm07 to 455.7M 2026-03-10T13:44:45.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: cephadm 2026-03-10T13:44:44.673108+0000 mgr.a (mgr.14150) 118 : cephadm [INF] Adjusting osd_memory_target on vm07 to 455.7M 2026-03-10T13:44:45.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: cephadm 2026-03-10T13:44:44.673457+0000 mgr.a (mgr.14150) 119 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 477915955: error parsing 
value: Value '477915955' is below minimum 939524096 2026-03-10T13:44:45.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: cephadm 2026-03-10T13:44:44.673457+0000 mgr.a (mgr.14150) 119 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-10T13:44:45.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: audit 2026-03-10T13:44:44.673698+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:45.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: audit 2026-03-10T13:44:44.673698+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:45.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: audit 2026-03-10T13:44:44.674124+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:45.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: audit 2026-03-10T13:44:44.674124+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:45.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: audit 2026-03-10T13:44:44.677193+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:45.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:45 vm00 bash[20748]: audit 2026-03-10T13:44:44.677193+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:45.999 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: cluster 2026-03-10T13:44:44.213399+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: cluster 2026-03-10T13:44:44.213399+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: cephadm 2026-03-10T13:44:44.663531+0000 mgr.a (mgr.14150) 117 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: cephadm 2026-03-10T13:44:44.663531+0000 mgr.a (mgr.14150) 117 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: audit 2026-03-10T13:44:44.668348+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: audit 2026-03-10T13:44:44.668348+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: audit 2026-03-10T13:44:44.672191+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: audit 2026-03-10T13:44:44.672191+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: audit 2026-03-10T13:44:44.672781+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: audit 2026-03-10T13:44:44.672781+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: cephadm 2026-03-10T13:44:44.673108+0000 mgr.a (mgr.14150) 118 : cephadm [INF] Adjusting osd_memory_target on vm07 to 455.7M 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: cephadm 2026-03-10T13:44:44.673108+0000 mgr.a (mgr.14150) 118 : cephadm [INF] Adjusting osd_memory_target on vm07 to 455.7M 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: cephadm 2026-03-10T13:44:44.673457+0000 mgr.a (mgr.14150) 119 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: cephadm 2026-03-10T13:44:44.673457+0000 mgr.a (mgr.14150) 119 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: audit 2026-03-10T13:44:44.673698+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: audit 2026-03-10T13:44:44.673698+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: audit 2026-03-10T13:44:44.674124+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: audit 2026-03-10T13:44:44.674124+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: audit 2026-03-10T13:44:44.677193+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:45.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:45 vm07 bash[23044]: audit 2026-03-10T13:44:44.677193+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:46.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: cluster 2026-03-10T13:44:44.213399+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:46.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: cluster 2026-03-10T13:44:44.213399+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:46.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: cephadm 2026-03-10T13:44:44.663531+0000 mgr.a (mgr.14150) 117 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T13:44:46.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: cephadm 2026-03-10T13:44:44.663531+0000 mgr.a (mgr.14150) 117 : cephadm [INF] Detected new or changed 
devices on vm07 2026-03-10T13:44:46.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: audit 2026-03-10T13:44:44.668348+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:46.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: audit 2026-03-10T13:44:44.668348+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:46.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: audit 2026-03-10T13:44:44.672191+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:46.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: audit 2026-03-10T13:44:44.672191+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:46.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: audit 2026-03-10T13:44:44.672781+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:44:46.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: audit 2026-03-10T13:44:44.672781+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:44:46.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: cephadm 2026-03-10T13:44:44.673108+0000 mgr.a (mgr.14150) 118 : cephadm [INF] Adjusting osd_memory_target on vm07 to 455.7M 2026-03-10T13:44:46.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: cephadm 2026-03-10T13:44:44.673108+0000 mgr.a (mgr.14150) 118 : cephadm [INF] Adjusting osd_memory_target on vm07 to 455.7M 
2026-03-10T13:44:46.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: cephadm 2026-03-10T13:44:44.673457+0000 mgr.a (mgr.14150) 119 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-10T13:44:46.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: cephadm 2026-03-10T13:44:44.673457+0000 mgr.a (mgr.14150) 119 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 477915955: error parsing value: Value '477915955' is below minimum 939524096 2026-03-10T13:44:46.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: audit 2026-03-10T13:44:44.673698+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:46.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: audit 2026-03-10T13:44:44.673698+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:46.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: audit 2026-03-10T13:44:44.674124+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:46.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: audit 2026-03-10T13:44:44.674124+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:44:46.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: audit 2026-03-10T13:44:44.677193+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 
2026-03-10T13:44:46.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:45 vm08 bash[23387]: audit 2026-03-10T13:44:44.677193+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:44:47.274 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.c/config 2026-03-10T13:44:47.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:47 vm00 bash[20748]: cluster 2026-03-10T13:44:46.213601+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:47.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:47 vm00 bash[20748]: cluster 2026-03-10T13:44:46.213601+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:47.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:47 vm00 bash[20748]: audit 2026-03-10T13:44:47.546625+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:44:47.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:47 vm00 bash[20748]: audit 2026-03-10T13:44:47.546625+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:44:47.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:47 vm00 bash[20748]: audit 2026-03-10T13:44:47.547925+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:44:47.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:47 vm00 bash[20748]: audit 2026-03-10T13:44:47.547925+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' 
entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:44:47.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:47 vm00 bash[20748]: audit 2026-03-10T13:44:47.548371+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:47.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:47 vm00 bash[20748]: audit 2026-03-10T13:44:47.548371+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:47.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:47 vm07 bash[23044]: cluster 2026-03-10T13:44:46.213601+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:47.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:47 vm07 bash[23044]: cluster 2026-03-10T13:44:46.213601+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:47.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:47 vm07 bash[23044]: audit 2026-03-10T13:44:47.546625+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:44:47.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:47 vm07 bash[23044]: audit 2026-03-10T13:44:47.546625+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:44:47.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:47 vm07 bash[23044]: audit 2026-03-10T13:44:47.547925+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 
cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:44:47.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:47 vm07 bash[23044]: audit 2026-03-10T13:44:47.547925+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:44:47.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:47 vm07 bash[23044]: audit 2026-03-10T13:44:47.548371+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:47.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:47 vm07 bash[23044]: audit 2026-03-10T13:44:47.548371+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:48.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:47 vm08 bash[23387]: cluster 2026-03-10T13:44:46.213601+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:48.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:47 vm08 bash[23387]: cluster 2026-03-10T13:44:46.213601+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:48.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:47 vm08 bash[23387]: audit 2026-03-10T13:44:47.546625+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:44:48.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:47 vm08 bash[23387]: audit 2026-03-10T13:44:47.546625+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd tree", 
"states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T13:44:48.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:47 vm08 bash[23387]: audit 2026-03-10T13:44:47.547925+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:44:48.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:47 vm08 bash[23387]: audit 2026-03-10T13:44:47.547925+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T13:44:48.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:47 vm08 bash[23387]: audit 2026-03-10T13:44:47.548371+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:48.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:47 vm08 bash[23387]: audit 2026-03-10T13:44:47.548371+0000 mon.a (mon.0) 370 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:44:48.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:48 vm00 bash[20748]: audit 2026-03-10T13:44:47.545334+0000 mgr.a (mgr.14150) 121 : audit [DBG] from='client.24154 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:44:48.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:48 vm00 bash[20748]: audit 2026-03-10T13:44:47.545334+0000 mgr.a (mgr.14150) 121 : audit [DBG] from='client.24154 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:44:48.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:48 vm07 bash[23044]: audit 
2026-03-10T13:44:47.545334+0000 mgr.a (mgr.14150) 121 : audit [DBG] from='client.24154 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:44:48.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:48 vm07 bash[23044]: audit 2026-03-10T13:44:47.545334+0000 mgr.a (mgr.14150) 121 : audit [DBG] from='client.24154 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:44:49.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:48 vm08 bash[23387]: audit 2026-03-10T13:44:47.545334+0000 mgr.a (mgr.14150) 121 : audit [DBG] from='client.24154 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:44:49.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:48 vm08 bash[23387]: audit 2026-03-10T13:44:47.545334+0000 mgr.a (mgr.14150) 121 : audit [DBG] from='client.24154 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:44:49.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:49 vm00 bash[20748]: cluster 2026-03-10T13:44:48.213786+0000 mgr.a (mgr.14150) 122 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:49.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:49 vm00 bash[20748]: cluster 2026-03-10T13:44:48.213786+0000 mgr.a (mgr.14150) 122 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:49.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:49 vm07 bash[23044]: cluster 2026-03-10T13:44:48.213786+0000 mgr.a (mgr.14150) 122 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:49.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:49 vm07 bash[23044]: 
cluster 2026-03-10T13:44:48.213786+0000 mgr.a (mgr.14150) 122 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:50.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:49 vm08 bash[23387]: cluster 2026-03-10T13:44:48.213786+0000 mgr.a (mgr.14150) 122 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:50.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:49 vm08 bash[23387]: cluster 2026-03-10T13:44:48.213786+0000 mgr.a (mgr.14150) 122 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:51.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:51 vm00 bash[20748]: cluster 2026-03-10T13:44:50.214048+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:51.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:51 vm00 bash[20748]: cluster 2026-03-10T13:44:50.214048+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:51.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:51 vm07 bash[23044]: cluster 2026-03-10T13:44:50.214048+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:51.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:51 vm07 bash[23044]: cluster 2026-03-10T13:44:50.214048+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:52.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:51 vm08 bash[23387]: cluster 2026-03-10T13:44:50.214048+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:52.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:51 vm08 bash[23387]: cluster 2026-03-10T13:44:50.214048+0000 mgr.a (mgr.14150) 123 : cluster [DBG] 
pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:53.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:53 vm00 bash[20748]: cluster 2026-03-10T13:44:52.214250+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:53.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:53 vm00 bash[20748]: cluster 2026-03-10T13:44:52.214250+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:53.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:53 vm00 bash[20748]: audit 2026-03-10T13:44:52.870561+0000 mon.b (mon.2) 7 : audit [INF] from='client.? 192.168.123.108:0/923805459' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "84b6e04b-cad7-4941-bbb1-4ca53f9ed622"}]: dispatch 2026-03-10T13:44:53.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:53 vm00 bash[20748]: audit 2026-03-10T13:44:52.870561+0000 mon.b (mon.2) 7 : audit [INF] from='client.? 192.168.123.108:0/923805459' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "84b6e04b-cad7-4941-bbb1-4ca53f9ed622"}]: dispatch 2026-03-10T13:44:53.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:53 vm00 bash[20748]: audit 2026-03-10T13:44:52.872899+0000 mon.a (mon.0) 371 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "84b6e04b-cad7-4941-bbb1-4ca53f9ed622"}]: dispatch 2026-03-10T13:44:53.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:53 vm00 bash[20748]: audit 2026-03-10T13:44:52.872899+0000 mon.a (mon.0) 371 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "84b6e04b-cad7-4941-bbb1-4ca53f9ed622"}]: dispatch 2026-03-10T13:44:53.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:53 vm00 bash[20748]: audit 2026-03-10T13:44:52.875810+0000 mon.a (mon.0) 372 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "84b6e04b-cad7-4941-bbb1-4ca53f9ed622"}]': finished 2026-03-10T13:44:53.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:53 vm00 bash[20748]: audit 2026-03-10T13:44:52.875810+0000 mon.a (mon.0) 372 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "84b6e04b-cad7-4941-bbb1-4ca53f9ed622"}]': finished 2026-03-10T13:44:53.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:53 vm00 bash[20748]: cluster 2026-03-10T13:44:52.878615+0000 mon.a (mon.0) 373 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-10T13:44:53.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:53 vm00 bash[20748]: cluster 2026-03-10T13:44:52.878615+0000 mon.a (mon.0) 373 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-10T13:44:53.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:53 vm00 bash[20748]: audit 2026-03-10T13:44:52.878697+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:44:53.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:53 vm00 bash[20748]: audit 2026-03-10T13:44:52.878697+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:44:53.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:53 vm00 bash[20748]: audit 2026-03-10T13:44:53.471254+0000 mon.a (mon.0) 375 : audit [DBG] from='client.? 192.168.123.108:0/983946491' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:44:53.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:53 vm00 bash[20748]: audit 2026-03-10T13:44:53.471254+0000 mon.a (mon.0) 375 : audit [DBG] from='client.? 
192.168.123.108:0/983946491' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:44:53.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:53 vm07 bash[23044]: cluster 2026-03-10T13:44:52.214250+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:53.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:53 vm07 bash[23044]: cluster 2026-03-10T13:44:52.214250+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:53.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:53 vm07 bash[23044]: audit 2026-03-10T13:44:52.870561+0000 mon.b (mon.2) 7 : audit [INF] from='client.? 192.168.123.108:0/923805459' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "84b6e04b-cad7-4941-bbb1-4ca53f9ed622"}]: dispatch 2026-03-10T13:44:53.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:53 vm07 bash[23044]: audit 2026-03-10T13:44:52.870561+0000 mon.b (mon.2) 7 : audit [INF] from='client.? 192.168.123.108:0/923805459' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "84b6e04b-cad7-4941-bbb1-4ca53f9ed622"}]: dispatch 2026-03-10T13:44:53.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:53 vm07 bash[23044]: audit 2026-03-10T13:44:52.872899+0000 mon.a (mon.0) 371 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "84b6e04b-cad7-4941-bbb1-4ca53f9ed622"}]: dispatch 2026-03-10T13:44:53.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:53 vm07 bash[23044]: audit 2026-03-10T13:44:52.872899+0000 mon.a (mon.0) 371 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "84b6e04b-cad7-4941-bbb1-4ca53f9ed622"}]: dispatch 2026-03-10T13:44:53.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:53 vm07 bash[23044]: audit 2026-03-10T13:44:52.875810+0000 mon.a (mon.0) 372 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "84b6e04b-cad7-4941-bbb1-4ca53f9ed622"}]': finished 2026-03-10T13:44:53.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:53 vm07 bash[23044]: audit 2026-03-10T13:44:52.875810+0000 mon.a (mon.0) 372 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "84b6e04b-cad7-4941-bbb1-4ca53f9ed622"}]': finished 2026-03-10T13:44:53.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:53 vm07 bash[23044]: cluster 2026-03-10T13:44:52.878615+0000 mon.a (mon.0) 373 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-10T13:44:53.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:53 vm07 bash[23044]: cluster 2026-03-10T13:44:52.878615+0000 mon.a (mon.0) 373 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-10T13:44:53.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:53 vm07 bash[23044]: audit 2026-03-10T13:44:52.878697+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:44:53.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:53 vm07 bash[23044]: audit 2026-03-10T13:44:52.878697+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:44:53.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:53 vm07 bash[23044]: audit 2026-03-10T13:44:53.471254+0000 mon.a (mon.0) 375 : audit [DBG] from='client.? 
192.168.123.108:0/983946491' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:44:53.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:53 vm07 bash[23044]: audit 2026-03-10T13:44:53.471254+0000 mon.a (mon.0) 375 : audit [DBG] from='client.? 192.168.123.108:0/983946491' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:44:54.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:53 vm08 bash[23387]: cluster 2026-03-10T13:44:52.214250+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:54.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:53 vm08 bash[23387]: cluster 2026-03-10T13:44:52.214250+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:54.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:53 vm08 bash[23387]: audit 2026-03-10T13:44:52.870561+0000 mon.b (mon.2) 7 : audit [INF] from='client.? 192.168.123.108:0/923805459' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "84b6e04b-cad7-4941-bbb1-4ca53f9ed622"}]: dispatch 2026-03-10T13:44:54.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:53 vm08 bash[23387]: audit 2026-03-10T13:44:52.870561+0000 mon.b (mon.2) 7 : audit [INF] from='client.? 192.168.123.108:0/923805459' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "84b6e04b-cad7-4941-bbb1-4ca53f9ed622"}]: dispatch 2026-03-10T13:44:54.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:53 vm08 bash[23387]: audit 2026-03-10T13:44:52.872899+0000 mon.a (mon.0) 371 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "84b6e04b-cad7-4941-bbb1-4ca53f9ed622"}]: dispatch 2026-03-10T13:44:54.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:53 vm08 bash[23387]: audit 2026-03-10T13:44:52.872899+0000 mon.a (mon.0) 371 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "84b6e04b-cad7-4941-bbb1-4ca53f9ed622"}]: dispatch 2026-03-10T13:44:54.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:53 vm08 bash[23387]: audit 2026-03-10T13:44:52.875810+0000 mon.a (mon.0) 372 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "84b6e04b-cad7-4941-bbb1-4ca53f9ed622"}]': finished 2026-03-10T13:44:54.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:53 vm08 bash[23387]: audit 2026-03-10T13:44:52.875810+0000 mon.a (mon.0) 372 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "84b6e04b-cad7-4941-bbb1-4ca53f9ed622"}]': finished 2026-03-10T13:44:54.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:53 vm08 bash[23387]: cluster 2026-03-10T13:44:52.878615+0000 mon.a (mon.0) 373 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-10T13:44:54.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:53 vm08 bash[23387]: cluster 2026-03-10T13:44:52.878615+0000 mon.a (mon.0) 373 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-10T13:44:54.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:53 vm08 bash[23387]: audit 2026-03-10T13:44:52.878697+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:44:54.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:53 vm08 bash[23387]: audit 2026-03-10T13:44:52.878697+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:44:54.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:53 vm08 bash[23387]: audit 2026-03-10T13:44:53.471254+0000 mon.a (mon.0) 375 : audit [DBG] from='client.? 
192.168.123.108:0/983946491' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:44:54.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:53 vm08 bash[23387]: audit 2026-03-10T13:44:53.471254+0000 mon.a (mon.0) 375 : audit [DBG] from='client.? 192.168.123.108:0/983946491' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T13:44:55.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:55 vm00 bash[20748]: cluster 2026-03-10T13:44:54.214474+0000 mgr.a (mgr.14150) 125 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:55.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:55 vm00 bash[20748]: cluster 2026-03-10T13:44:54.214474+0000 mgr.a (mgr.14150) 125 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:55.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:55 vm07 bash[23044]: cluster 2026-03-10T13:44:54.214474+0000 mgr.a (mgr.14150) 125 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:55.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:55 vm07 bash[23044]: cluster 2026-03-10T13:44:54.214474+0000 mgr.a (mgr.14150) 125 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:56.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:55 vm08 bash[23387]: cluster 2026-03-10T13:44:54.214474+0000 mgr.a (mgr.14150) 125 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:56.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:55 vm08 bash[23387]: cluster 2026-03-10T13:44:54.214474+0000 mgr.a (mgr.14150) 125 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:57.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:57 vm00 bash[20748]: cluster 2026-03-10T13:44:56.214708+0000 mgr.a (mgr.14150) 126 : cluster [DBG] 
pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:57.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:57 vm00 bash[20748]: cluster 2026-03-10T13:44:56.214708+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:57.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:57 vm07 bash[23044]: cluster 2026-03-10T13:44:56.214708+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:57.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:57 vm07 bash[23044]: cluster 2026-03-10T13:44:56.214708+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:58.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:57 vm08 bash[23387]: cluster 2026-03-10T13:44:56.214708+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:58.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:57 vm08 bash[23387]: cluster 2026-03-10T13:44:56.214708+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:59.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:59 vm00 bash[20748]: cluster 2026-03-10T13:44:58.214931+0000 mgr.a (mgr.14150) 127 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:59.966 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:44:59 vm00 bash[20748]: cluster 2026-03-10T13:44:58.214931+0000 mgr.a (mgr.14150) 127 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:44:59.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:59 vm07 bash[23044]: cluster 2026-03-10T13:44:58.214931+0000 mgr.a (mgr.14150) 127 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 
2026-03-10T13:44:59.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:44:59 vm07 bash[23044]: cluster 2026-03-10T13:44:58.214931+0000 mgr.a (mgr.14150) 127 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:00.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:59 vm08 bash[23387]: cluster 2026-03-10T13:44:58.214931+0000 mgr.a (mgr.14150) 127 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:00.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:44:59 vm08 bash[23387]: cluster 2026-03-10T13:44:58.214931+0000 mgr.a (mgr.14150) 127 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:01.825 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:01 vm08 bash[23387]: cluster 2026-03-10T13:45:00.215193+0000 mgr.a (mgr.14150) 128 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:01.825 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:01 vm08 bash[23387]: cluster 2026-03-10T13:45:00.215193+0000 mgr.a (mgr.14150) 128 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:01.825 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:01 vm08 bash[23387]: audit 2026-03-10T13:45:01.592612+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T13:45:01.825 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:01 vm08 bash[23387]: audit 2026-03-10T13:45:01.592612+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T13:45:01.825 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:01 vm08 bash[23387]: audit 2026-03-10T13:45:01.593071+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:01.825 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:01 vm08 bash[23387]: audit 2026-03-10T13:45:01.593071+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:01 vm07 bash[23044]: cluster 2026-03-10T13:45:00.215193+0000 mgr.a (mgr.14150) 128 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:01 vm07 bash[23044]: cluster 2026-03-10T13:45:00.215193+0000 mgr.a (mgr.14150) 128 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:01 vm07 bash[23044]: audit 2026-03-10T13:45:01.592612+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T13:45:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:01 vm07 bash[23044]: audit 2026-03-10T13:45:01.592612+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T13:45:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:01 vm07 bash[23044]: audit 2026-03-10T13:45:01.593071+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:01 vm07 bash[23044]: audit 2026-03-10T13:45:01.593071+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:02.216 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:01 vm00 bash[20748]: cluster 2026-03-10T13:45:00.215193+0000 mgr.a (mgr.14150) 128 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:02.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:01 vm00 bash[20748]: cluster 2026-03-10T13:45:00.215193+0000 mgr.a (mgr.14150) 128 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:02.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:01 vm00 bash[20748]: audit 2026-03-10T13:45:01.592612+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T13:45:02.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:01 vm00 bash[20748]: audit 2026-03-10T13:45:01.592612+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T13:45:02.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:01 vm00 bash[20748]: audit 2026-03-10T13:45:01.593071+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:02.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:01 vm00 bash[20748]: audit 2026-03-10T13:45:01.593071+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:02.460 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:02 vm08 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T13:45:02.730 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:02 vm08 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T13:45:02.730 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:02 vm08 bash[23387]: cephadm 2026-03-10T13:45:01.593416+0000 mgr.a (mgr.14150) 129 : cephadm [INF] Deploying daemon osd.2 on vm08 2026-03-10T13:45:02.730 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:02 vm08 bash[23387]: cephadm 2026-03-10T13:45:01.593416+0000 mgr.a (mgr.14150) 129 : cephadm [INF] Deploying daemon osd.2 on vm08 2026-03-10T13:45:02.730 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:02 vm08 bash[23387]: audit 2026-03-10T13:45:02.555055+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:45:02.730 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:02 vm08 bash[23387]: audit 2026-03-10T13:45:02.555055+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:45:02.730 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:02 vm08 bash[23387]: audit 2026-03-10T13:45:02.559237+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:02.730 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:02 vm08 bash[23387]: audit 2026-03-10T13:45:02.559237+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:02.730 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:02 vm08 bash[23387]: audit 2026-03-10T13:45:02.562672+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:02.730 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:02 vm08 bash[23387]: audit 2026-03-10T13:45:02.562672+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:02 vm07 bash[23044]: cephadm 2026-03-10T13:45:01.593416+0000 mgr.a (mgr.14150) 129 : cephadm [INF] Deploying daemon osd.2 on vm08 2026-03-10T13:45:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:02 vm07 bash[23044]: cephadm 2026-03-10T13:45:01.593416+0000 mgr.a (mgr.14150) 129 : cephadm [INF] Deploying daemon osd.2 on vm08 2026-03-10T13:45:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:02 vm07 bash[23044]: audit 2026-03-10T13:45:02.555055+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:45:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:02 vm07 bash[23044]: audit 2026-03-10T13:45:02.555055+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:45:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:02 vm07 bash[23044]: audit 2026-03-10T13:45:02.559237+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:02 vm07 bash[23044]: audit 2026-03-10T13:45:02.559237+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:02 vm07 bash[23044]: audit 
2026-03-10T13:45:02.562672+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:02 vm07 bash[23044]: audit 2026-03-10T13:45:02.562672+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:03.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:02 vm00 bash[20748]: cephadm 2026-03-10T13:45:01.593416+0000 mgr.a (mgr.14150) 129 : cephadm [INF] Deploying daemon osd.2 on vm08 2026-03-10T13:45:03.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:02 vm00 bash[20748]: cephadm 2026-03-10T13:45:01.593416+0000 mgr.a (mgr.14150) 129 : cephadm [INF] Deploying daemon osd.2 on vm08 2026-03-10T13:45:03.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:02 vm00 bash[20748]: audit 2026-03-10T13:45:02.555055+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:45:03.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:02 vm00 bash[20748]: audit 2026-03-10T13:45:02.555055+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:45:03.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:02 vm00 bash[20748]: audit 2026-03-10T13:45:02.559237+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:03.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:02 vm00 bash[20748]: audit 2026-03-10T13:45:02.559237+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:03.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:02 vm00 bash[20748]: audit 2026-03-10T13:45:02.562672+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:03.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:02 vm00 bash[20748]: audit 2026-03-10T13:45:02.562672+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:03.953 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:03 vm08 bash[23387]: cluster 2026-03-10T13:45:02.215380+0000 mgr.a (mgr.14150) 130 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:03.953 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:03 vm08 bash[23387]: cluster 2026-03-10T13:45:02.215380+0000 mgr.a (mgr.14150) 130 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:03 vm07 bash[23044]: cluster 2026-03-10T13:45:02.215380+0000 mgr.a (mgr.14150) 130 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:03 vm07 bash[23044]: cluster 2026-03-10T13:45:02.215380+0000 mgr.a (mgr.14150) 130 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:04.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:03 vm00 bash[20748]: cluster 2026-03-10T13:45:02.215380+0000 mgr.a (mgr.14150) 130 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:04.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:03 vm00 bash[20748]: cluster 2026-03-10T13:45:02.215380+0000 mgr.a (mgr.14150) 130 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:05.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:05 vm07 bash[23044]: cluster 2026-03-10T13:45:04.215568+0000 mgr.a (mgr.14150) 131 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:05.999 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:05 vm07 bash[23044]: cluster 2026-03-10T13:45:04.215568+0000 mgr.a (mgr.14150) 131 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:06.010 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:05 vm08 bash[23387]: cluster 2026-03-10T13:45:04.215568+0000 mgr.a (mgr.14150) 131 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:06.011 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:05 vm08 bash[23387]: cluster 2026-03-10T13:45:04.215568+0000 mgr.a (mgr.14150) 131 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:06.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:05 vm00 bash[20748]: cluster 2026-03-10T13:45:04.215568+0000 mgr.a (mgr.14150) 131 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:06.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:05 vm00 bash[20748]: cluster 2026-03-10T13:45:04.215568+0000 mgr.a (mgr.14150) 131 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:06.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:06 vm07 bash[23044]: audit 2026-03-10T13:45:06.015853+0000 mon.c (mon.1) 5 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/941417901,v1:192.168.123.108:6801/941417901]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T13:45:06.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:06 vm07 bash[23044]: audit 2026-03-10T13:45:06.015853+0000 mon.c (mon.1) 5 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/941417901,v1:192.168.123.108:6801/941417901]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T13:45:06.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:06 vm07 bash[23044]: audit 
2026-03-10T13:45:06.016246+0000 mon.a (mon.0) 381 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T13:45:06.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:06 vm07 bash[23044]: audit 2026-03-10T13:45:06.016246+0000 mon.a (mon.0) 381 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T13:45:07.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:06 vm08 bash[23387]: audit 2026-03-10T13:45:06.015853+0000 mon.c (mon.1) 5 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/941417901,v1:192.168.123.108:6801/941417901]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T13:45:07.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:06 vm08 bash[23387]: audit 2026-03-10T13:45:06.015853+0000 mon.c (mon.1) 5 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/941417901,v1:192.168.123.108:6801/941417901]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T13:45:07.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:06 vm08 bash[23387]: audit 2026-03-10T13:45:06.016246+0000 mon.a (mon.0) 381 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T13:45:07.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:06 vm08 bash[23387]: audit 2026-03-10T13:45:06.016246+0000 mon.a (mon.0) 381 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T13:45:07.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:06 vm00 bash[20748]: audit 2026-03-10T13:45:06.015853+0000 mon.c (mon.1) 5 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/941417901,v1:192.168.123.108:6801/941417901]' entity='osd.2' 
cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T13:45:07.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:06 vm00 bash[20748]: audit 2026-03-10T13:45:06.015853+0000 mon.c (mon.1) 5 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/941417901,v1:192.168.123.108:6801/941417901]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T13:45:07.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:06 vm00 bash[20748]: audit 2026-03-10T13:45:06.016246+0000 mon.a (mon.0) 381 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T13:45:07.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:06 vm00 bash[20748]: audit 2026-03-10T13:45:06.016246+0000 mon.a (mon.0) 381 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T13:45:08.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:07 vm08 bash[23387]: cluster 2026-03-10T13:45:06.215773+0000 mgr.a (mgr.14150) 132 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:08.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:07 vm08 bash[23387]: cluster 2026-03-10T13:45:06.215773+0000 mgr.a (mgr.14150) 132 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:08.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:07 vm08 bash[23387]: audit 2026-03-10T13:45:06.748250+0000 mon.a (mon.0) 382 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T13:45:08.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:07 vm08 bash[23387]: audit 2026-03-10T13:45:06.748250+0000 mon.a (mon.0) 382 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T13:45:08.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:07 vm08 bash[23387]: audit 2026-03-10T13:45:06.750696+0000 mon.c (mon.1) 6 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/941417901,v1:192.168.123.108:6801/941417901]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T13:45:08.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:07 vm08 bash[23387]: audit 2026-03-10T13:45:06.750696+0000 mon.c (mon.1) 6 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/941417901,v1:192.168.123.108:6801/941417901]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T13:45:08.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:07 vm08 bash[23387]: cluster 2026-03-10T13:45:06.751077+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-10T13:45:08.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:07 vm08 bash[23387]: cluster 2026-03-10T13:45:06.751077+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-10T13:45:08.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:07 vm08 bash[23387]: audit 2026-03-10T13:45:06.751307+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:08.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:07 vm08 bash[23387]: audit 2026-03-10T13:45:06.751307+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:08.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:07 vm08 bash[23387]: audit 2026-03-10T13:45:06.751395+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' 
cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T13:45:08.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:07 vm08 bash[23387]: audit 2026-03-10T13:45:06.751395+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T13:45:08.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:07 vm00 bash[20748]: cluster 2026-03-10T13:45:06.215773+0000 mgr.a (mgr.14150) 132 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:08.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:07 vm00 bash[20748]: cluster 2026-03-10T13:45:06.215773+0000 mgr.a (mgr.14150) 132 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:08.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:07 vm00 bash[20748]: audit 2026-03-10T13:45:06.748250+0000 mon.a (mon.0) 382 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T13:45:08.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:07 vm00 bash[20748]: audit 2026-03-10T13:45:06.748250+0000 mon.a (mon.0) 382 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T13:45:08.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:07 vm00 bash[20748]: audit 2026-03-10T13:45:06.750696+0000 mon.c (mon.1) 6 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/941417901,v1:192.168.123.108:6801/941417901]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T13:45:08.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:07 vm00 bash[20748]: audit 
2026-03-10T13:45:06.750696+0000 mon.c (mon.1) 6 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/941417901,v1:192.168.123.108:6801/941417901]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T13:45:08.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:07 vm00 bash[20748]: cluster 2026-03-10T13:45:06.751077+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-10T13:45:08.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:07 vm00 bash[20748]: cluster 2026-03-10T13:45:06.751077+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-10T13:45:08.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:07 vm00 bash[20748]: audit 2026-03-10T13:45:06.751307+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:08.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:07 vm00 bash[20748]: audit 2026-03-10T13:45:06.751307+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:08.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:07 vm00 bash[20748]: audit 2026-03-10T13:45:06.751395+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T13:45:08.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:07 vm00 bash[20748]: audit 2026-03-10T13:45:06.751395+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T13:45:08.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:07 vm07 bash[23044]: cluster 
2026-03-10T13:45:06.215773+0000 mgr.a (mgr.14150) 132 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:08.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:07 vm07 bash[23044]: cluster 2026-03-10T13:45:06.215773+0000 mgr.a (mgr.14150) 132 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:08.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:07 vm07 bash[23044]: audit 2026-03-10T13:45:06.748250+0000 mon.a (mon.0) 382 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T13:45:08.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:07 vm07 bash[23044]: audit 2026-03-10T13:45:06.748250+0000 mon.a (mon.0) 382 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T13:45:08.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:07 vm07 bash[23044]: audit 2026-03-10T13:45:06.750696+0000 mon.c (mon.1) 6 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/941417901,v1:192.168.123.108:6801/941417901]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T13:45:08.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:07 vm07 bash[23044]: audit 2026-03-10T13:45:06.750696+0000 mon.c (mon.1) 6 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/941417901,v1:192.168.123.108:6801/941417901]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T13:45:08.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:07 vm07 bash[23044]: cluster 2026-03-10T13:45:06.751077+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-10T13:45:08.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:07 vm07 
bash[23044]: cluster 2026-03-10T13:45:06.751077+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-10T13:45:08.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:07 vm07 bash[23044]: audit 2026-03-10T13:45:06.751307+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:08.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:07 vm07 bash[23044]: audit 2026-03-10T13:45:06.751307+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:08.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:07 vm07 bash[23044]: audit 2026-03-10T13:45:06.751395+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T13:45:08.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:07 vm07 bash[23044]: audit 2026-03-10T13:45:06.751395+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-10T13:45:08.786 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:08 vm08 bash[23387]: audit 2026-03-10T13:45:07.751084+0000 mon.a (mon.0) 386 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T13:45:08.786 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:08 vm08 bash[23387]: audit 2026-03-10T13:45:07.751084+0000 mon.a (mon.0) 386 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T13:45:08.786 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:08 vm08 bash[23387]: cluster 2026-03-10T13:45:07.753896+0000 mon.a (mon.0) 387 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-10T13:45:08.786 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:08 vm08 bash[23387]: cluster 2026-03-10T13:45:07.753896+0000 mon.a (mon.0) 387 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-10T13:45:08.786 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:08 vm08 bash[23387]: audit 2026-03-10T13:45:07.754625+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:08.786 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:08 vm08 bash[23387]: audit 2026-03-10T13:45:07.754625+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:08.786 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:08 vm08 bash[23387]: audit 2026-03-10T13:45:07.756821+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:08.786 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:08 vm08 bash[23387]: audit 2026-03-10T13:45:07.756821+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:08.786 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:08 vm08 bash[23387]: audit 2026-03-10T13:45:08.598171+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:08.786 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:08 vm08 bash[23387]: audit 2026-03-10T13:45:08.598171+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:08.786 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:08 vm08 bash[23387]: audit 2026-03-10T13:45:08.602019+0000 mon.a (mon.0) 391 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:08.786 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:08 vm08 bash[23387]: audit 2026-03-10T13:45:08.602019+0000 mon.a (mon.0) 391 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:09.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:08 vm00 bash[20748]: audit 2026-03-10T13:45:07.751084+0000 mon.a (mon.0) 386 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T13:45:09.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:08 vm00 bash[20748]: audit 2026-03-10T13:45:07.751084+0000 mon.a (mon.0) 386 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T13:45:09.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:08 vm00 bash[20748]: cluster 2026-03-10T13:45:07.753896+0000 mon.a (mon.0) 387 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-10T13:45:09.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:08 vm00 bash[20748]: cluster 2026-03-10T13:45:07.753896+0000 mon.a (mon.0) 387 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-10T13:45:09.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:08 vm00 bash[20748]: audit 2026-03-10T13:45:07.754625+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:09.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:08 vm00 bash[20748]: audit 2026-03-10T13:45:07.754625+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": 
"osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:09.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:08 vm00 bash[20748]: audit 2026-03-10T13:45:07.756821+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:09.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:08 vm00 bash[20748]: audit 2026-03-10T13:45:07.756821+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:09.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:08 vm00 bash[20748]: audit 2026-03-10T13:45:08.598171+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:09.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:08 vm00 bash[20748]: audit 2026-03-10T13:45:08.598171+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:09.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:08 vm00 bash[20748]: audit 2026-03-10T13:45:08.602019+0000 mon.a (mon.0) 391 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:09.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:08 vm00 bash[20748]: audit 2026-03-10T13:45:08.602019+0000 mon.a (mon.0) 391 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:09.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:08 vm07 bash[23044]: audit 2026-03-10T13:45:07.751084+0000 mon.a (mon.0) 386 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T13:45:09.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:08 vm07 bash[23044]: audit 2026-03-10T13:45:07.751084+0000 mon.a (mon.0) 386 : audit [INF] 
from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-10T13:45:09.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:08 vm07 bash[23044]: cluster 2026-03-10T13:45:07.753896+0000 mon.a (mon.0) 387 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-10T13:45:09.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:08 vm07 bash[23044]: cluster 2026-03-10T13:45:07.753896+0000 mon.a (mon.0) 387 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-10T13:45:09.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:08 vm07 bash[23044]: audit 2026-03-10T13:45:07.754625+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:09.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:08 vm07 bash[23044]: audit 2026-03-10T13:45:07.754625+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:09.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:08 vm07 bash[23044]: audit 2026-03-10T13:45:07.756821+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:09.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:08 vm07 bash[23044]: audit 2026-03-10T13:45:07.756821+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:09.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:08 vm07 bash[23044]: audit 2026-03-10T13:45:08.598171+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:09.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:08 vm07 
bash[23044]: audit 2026-03-10T13:45:08.598171+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:09.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:08 vm07 bash[23044]: audit 2026-03-10T13:45:08.602019+0000 mon.a (mon.0) 391 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:09.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:08 vm07 bash[23044]: audit 2026-03-10T13:45:08.602019+0000 mon.a (mon.0) 391 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:09.651 INFO:teuthology.orchestra.run.vm08.stdout:Created osd(s) 2 on host 'vm08' 2026-03-10T13:45:09.724 DEBUG:teuthology.orchestra.run.vm08:osd.2> sudo journalctl -f -n 0 -u ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@osd.2.service 2026-03-10T13:45:09.724 INFO:tasks.cephadm:Waiting for 3 OSDs to come up... 2026-03-10T13:45:09.724 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph osd stat -f json 2026-03-10T13:45:10.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: cluster 2026-03-10T13:45:06.989180+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T13:45:10.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: cluster 2026-03-10T13:45:06.989180+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: cluster 2026-03-10T13:45:06.989222+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: cluster 2026-03-10T13:45:06.989222+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps 
scrub ok 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: cluster 2026-03-10T13:45:08.215953+0000 mgr.a (mgr.14150) 133 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: cluster 2026-03-10T13:45:08.215953+0000 mgr.a (mgr.14150) 133 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: audit 2026-03-10T13:45:08.759452+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: audit 2026-03-10T13:45:08.759452+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: cluster 2026-03-10T13:45:08.772855+0000 mon.a (mon.0) 393 : cluster [INF] osd.2 [v2:192.168.123.108:6800/941417901,v1:192.168.123.108:6801/941417901] boot 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: cluster 2026-03-10T13:45:08.772855+0000 mon.a (mon.0) 393 : cluster [INF] osd.2 [v2:192.168.123.108:6800/941417901,v1:192.168.123.108:6801/941417901] boot 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: cluster 2026-03-10T13:45:08.772904+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: cluster 2026-03-10T13:45:08.772904+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-10T13:45:10.088 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: audit 2026-03-10T13:45:08.773020+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: audit 2026-03-10T13:45:08.773020+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: audit 2026-03-10T13:45:08.963030+0000 mon.a (mon.0) 396 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: audit 2026-03-10T13:45:08.963030+0000 mon.a (mon.0) 396 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: audit 2026-03-10T13:45:08.963666+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: audit 2026-03-10T13:45:08.963666+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: audit 2026-03-10T13:45:08.968288+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 
13:45:09 vm08 bash[23387]: audit 2026-03-10T13:45:08.968288+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: audit 2026-03-10T13:45:09.638561+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: audit 2026-03-10T13:45:09.638561+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: audit 2026-03-10T13:45:09.644459+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: audit 2026-03-10T13:45:09.644459+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: audit 2026-03-10T13:45:09.648268+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:09 vm08 bash[23387]: audit 2026-03-10T13:45:09.648268+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:10.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: cluster 2026-03-10T13:45:06.989180+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T13:45:10.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: cluster 2026-03-10T13:45:06.989180+0000 osd.2 (osd.2) 1 : 
cluster [DBG] purged_snaps scrub starts 2026-03-10T13:45:10.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: cluster 2026-03-10T13:45:06.989222+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T13:45:10.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: cluster 2026-03-10T13:45:06.989222+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T13:45:10.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: cluster 2026-03-10T13:45:08.215953+0000 mgr.a (mgr.14150) 133 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:10.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: cluster 2026-03-10T13:45:08.215953+0000 mgr.a (mgr.14150) 133 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:10.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: audit 2026-03-10T13:45:08.759452+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:10.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: audit 2026-03-10T13:45:08.759452+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:10.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: cluster 2026-03-10T13:45:08.772855+0000 mon.a (mon.0) 393 : cluster [INF] osd.2 [v2:192.168.123.108:6800/941417901,v1:192.168.123.108:6801/941417901] boot 2026-03-10T13:45:10.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: cluster 2026-03-10T13:45:08.772855+0000 mon.a (mon.0) 393 : cluster [INF] osd.2 [v2:192.168.123.108:6800/941417901,v1:192.168.123.108:6801/941417901] boot 2026-03-10T13:45:10.217 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: cluster 2026-03-10T13:45:08.772904+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-10T13:45:10.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: cluster 2026-03-10T13:45:08.772904+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-10T13:45:10.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: audit 2026-03-10T13:45:08.773020+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:10.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: audit 2026-03-10T13:45:08.773020+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:10.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: audit 2026-03-10T13:45:08.963030+0000 mon.a (mon.0) 396 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:10.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: audit 2026-03-10T13:45:08.963030+0000 mon.a (mon.0) 396 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:10.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: audit 2026-03-10T13:45:08.963666+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:45:10.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: audit 2026-03-10T13:45:08.963666+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14150 
192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:45:10.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: audit 2026-03-10T13:45:08.968288+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:10.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: audit 2026-03-10T13:45:08.968288+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:10.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: audit 2026-03-10T13:45:09.638561+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:45:10.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: audit 2026-03-10T13:45:09.638561+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:45:10.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: audit 2026-03-10T13:45:09.644459+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:10.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: audit 2026-03-10T13:45:09.644459+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:10.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: audit 2026-03-10T13:45:09.648268+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:10.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:09 vm00 bash[20748]: audit 2026-03-10T13:45:09.648268+0000 mon.a (mon.0) 401 : audit [INF] 
from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:10.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:09 vm07 bash[23044]: cluster 2026-03-10T13:45:06.989180+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T13:45:10.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:09 vm07 bash[23044]: cluster 2026-03-10T13:45:06.989180+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T13:45:10.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:09 vm07 bash[23044]: cluster 2026-03-10T13:45:06.989222+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T13:45:10.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:09 vm07 bash[23044]: cluster 2026-03-10T13:45:06.989222+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T13:45:10.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:09 vm07 bash[23044]: cluster 2026-03-10T13:45:08.215953+0000 mgr.a (mgr.14150) 133 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:10.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:09 vm07 bash[23044]: cluster 2026-03-10T13:45:08.215953+0000 mgr.a (mgr.14150) 133 : cluster [DBG] pgmap v86: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-10T13:45:10.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:09 vm07 bash[23044]: audit 2026-03-10T13:45:08.759452+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:10.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:09 vm07 bash[23044]: audit 2026-03-10T13:45:08.759452+0000 mon.a (mon.0) 392 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:45:10.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:09 vm07 bash[23044]: cluster 
2026-03-10T13:45:10.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:09 vm07 bash[23044]: cluster 2026-03-10T13:45:08.772855+0000 mon.a (mon.0) 393 : cluster [INF] osd.2 [v2:192.168.123.108:6800/941417901,v1:192.168.123.108:6801/941417901] boot
2026-03-10T13:45:10.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:09 vm07 bash[23044]: cluster 2026-03-10T13:45:08.772904+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-10T13:45:10.250 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:09 vm07 bash[23044]: audit 2026-03-10T13:45:08.773020+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T13:45:10.250 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:09 vm07 bash[23044]: audit 2026-03-10T13:45:08.963030+0000 mon.a (mon.0) 396 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:45:10.250 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:09 vm07 bash[23044]: audit 2026-03-10T13:45:08.963666+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:45:10.250 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:09 vm07 bash[23044]: audit 2026-03-10T13:45:08.968288+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:45:10.250 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:09 vm07 bash[23044]: audit 2026-03-10T13:45:09.638561+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:45:10.250 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:09 vm07 bash[23044]: audit 2026-03-10T13:45:09.644459+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:45:10.250 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:09 vm07 bash[23044]: audit 2026-03-10T13:45:09.648268+0000 mon.a (mon.0) 401 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:45:11.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:10 vm07 bash[23044]: cluster 2026-03-10T13:45:09.973262+0000 mon.a (mon.0) 402 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in
2026-03-10T13:45:11.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:10 vm07 bash[23044]: audit 2026-03-10T13:45:10.244345+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:45:11.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:10 vm08 bash[23387]: cluster 2026-03-10T13:45:09.973262+0000 mon.a (mon.0) 402 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in
2026-03-10T13:45:11.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:10 vm08 bash[23387]: audit 2026-03-10T13:45:10.244345+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:45:11.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:10 vm00 bash[20748]: cluster 2026-03-10T13:45:09.973262+0000 mon.a (mon.0) 402 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in
2026-03-10T13:45:11.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:10 vm00 bash[20748]: audit 2026-03-10T13:45:10.244345+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:45:12.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:11 vm07 bash[23044]: cluster 2026-03-10T13:45:10.216152+0000 mgr.a (mgr.14150) 134 : cluster [DBG] pgmap v89: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:45:12.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:11 vm07 bash[23044]: audit 2026-03-10T13:45:10.979260+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-10T13:45:12.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:11 vm07 bash[23044]: cluster 2026-03-10T13:45:10.982685+0000 mon.a (mon.0) 405 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in
2026-03-10T13:45:12.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:11 vm07 bash[23044]: audit 2026-03-10T13:45:10.983245+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:45:12.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:11 vm08 bash[23387]: cluster 2026-03-10T13:45:10.216152+0000 mgr.a (mgr.14150) 134 : cluster [DBG] pgmap v89: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:45:12.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:11 vm08 bash[23387]: audit 2026-03-10T13:45:10.979260+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-10T13:45:12.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:12 vm08 bash[23387]: cluster 2026-03-10T13:45:10.982685+0000 mon.a (mon.0) 405 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in
2026-03-10T13:45:12.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:12 vm08 bash[23387]: audit 2026-03-10T13:45:10.983245+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:45:12.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:11 vm00 bash[20748]: cluster 2026-03-10T13:45:10.216152+0000 mgr.a (mgr.14150) 134 : cluster [DBG] pgmap v89: 0 pgs: ; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:45:12.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:11 vm00 bash[20748]: audit 2026-03-10T13:45:10.979260+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-10T13:45:12.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:11 vm00 bash[20748]: cluster 2026-03-10T13:45:10.982685+0000 mon.a (mon.0) 405 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in
2026-03-10T13:45:12.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:11 vm00 bash[20748]: audit 2026-03-10T13:45:10.983245+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-10T13:45:13.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:12 vm07 bash[23044]: audit 2026-03-10T13:45:11.982224+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-10T13:45:13.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:12 vm07 bash[23044]: cluster 2026-03-10T13:45:11.986181+0000 mon.a (mon.0) 408 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-10T13:45:13.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:12 vm07 bash[23044]: audit 2026-03-10T13:45:12.304991+0000 mon.a (mon.0) 409 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T13:45:13.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:12 vm07 bash[23044]: audit 2026-03-10T13:45:12.323685+0000 mon.a (mon.0) 410 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T13:45:13.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:12 vm07 bash[23044]: audit 2026-03-10T13:45:12.323945+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T13:45:13.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:12 vm07 bash[23044]: audit 2026-03-10T13:45:12.324022+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:45:13.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:12 vm07 bash[23044]: audit 2026-03-10T13:45:12.324067+0000 mon.a (mon.0) 413 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:45:13.250 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:12 vm07 bash[23044]: audit 2026-03-10T13:45:12.325643+0000 mon.c (mon.1) 7 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T13:45:13.250 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:12 vm07 bash[23044]: audit 2026-03-10T13:45:12.325841+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T13:45:13.250 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:12 vm07 bash[23044]: audit 2026-03-10T13:45:12.325890+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:45:13.250 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:12 vm07 bash[23044]: audit 2026-03-10T13:45:12.325932+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:45:13.250 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:12 vm07 bash[23044]: audit 2026-03-10T13:45:12.342761+0000 mon.b (mon.2) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T13:45:13.250 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:12 vm07 bash[23044]: audit 2026-03-10T13:45:12.342887+0000 mon.c (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T13:45:13.250 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:12 vm07 bash[23044]: audit 2026-03-10T13:45:12.344950+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T13:45:13.250 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:12 vm07 bash[23044]: audit 2026-03-10T13:45:12.345006+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:45:13.250 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:12 vm07 bash[23044]: audit 2026-03-10T13:45:12.345047+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:45:13.250 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:12 vm07 bash[23044]: audit 2026-03-10T13:45:12.359948+0000 mon.b (mon.2) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T13:45:13.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:12 vm08 bash[23387]: audit 2026-03-10T13:45:11.982224+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-10T13:45:13.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:13 vm08 bash[23387]: cluster 2026-03-10T13:45:11.986181+0000 mon.a (mon.0) 408 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-10T13:45:13.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:13 vm08 bash[23387]: audit 2026-03-10T13:45:12.304991+0000 mon.a (mon.0) 409 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T13:45:13.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:13 vm08 bash[23387]: audit 2026-03-10T13:45:12.323685+0000 mon.a (mon.0) 410 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T13:45:13.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:13 vm08 bash[23387]: audit 2026-03-10T13:45:12.323945+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T13:45:13.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:13 vm08 bash[23387]: audit 2026-03-10T13:45:12.324022+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:45:13.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:13 vm08 bash[23387]: audit 2026-03-10T13:45:12.324067+0000 mon.a (mon.0) 413 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:45:13.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:13 vm08 bash[23387]: audit 2026-03-10T13:45:12.325643+0000 mon.c (mon.1) 7 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T13:45:13.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:13 vm08 bash[23387]: audit 2026-03-10T13:45:12.325841+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T13:45:13.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:13 vm08 bash[23387]: audit 2026-03-10T13:45:12.325890+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:45:13.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:13 vm08 bash[23387]: audit 2026-03-10T13:45:12.325932+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:45:13.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:13 vm08 bash[23387]: audit 2026-03-10T13:45:12.342761+0000 mon.b (mon.2) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T13:45:13.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:13 vm08 bash[23387]: audit 2026-03-10T13:45:12.342887+0000 mon.c (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T13:45:13.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:13 vm08 bash[23387]: audit 2026-03-10T13:45:12.344950+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T13:45:13.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:13 vm08 bash[23387]: audit 2026-03-10T13:45:12.345006+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:45:13.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:13 vm08 bash[23387]: audit 2026-03-10T13:45:12.345047+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:45:13.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:13 vm08 bash[23387]: audit 2026-03-10T13:45:12.359948+0000 mon.b (mon.2) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T13:45:13.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:12 vm00 bash[20748]: audit 2026-03-10T13:45:11.982224+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-10T13:45:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:12 vm00 bash[20748]: cluster 2026-03-10T13:45:11.986181+0000 mon.a (mon.0) 408 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-10T13:45:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:12 vm00 bash[20748]: audit 2026-03-10T13:45:12.304991+0000 mon.a (mon.0) 409 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T13:45:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:12 vm00 bash[20748]: audit 2026-03-10T13:45:12.323685+0000 mon.a (mon.0) 410 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T13:45:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:12 vm00 bash[20748]: audit 2026-03-10T13:45:12.323945+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T13:45:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:13 vm00 bash[20748]: audit 2026-03-10T13:45:12.324022+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:45:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:13 vm00 bash[20748]: audit 2026-03-10T13:45:12.324067+0000 mon.a (mon.0) 413 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:45:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:13 vm00 bash[20748]: audit 2026-03-10T13:45:12.325643+0000 mon.c (mon.1) 7 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T13:45:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:13 vm00 bash[20748]: audit 2026-03-10T13:45:12.325841+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T13:45:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:13 vm00 bash[20748]: audit 2026-03-10T13:45:12.325890+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:45:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:13 vm00 bash[20748]: audit 2026-03-10T13:45:12.325932+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:45:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:13 vm00 bash[20748]: audit 2026-03-10T13:45:12.342761+0000 mon.b (mon.2) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T13:45:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:13 vm00 bash[20748]: audit 2026-03-10T13:45:12.342887+0000 mon.c (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T13:45:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:13 vm00 bash[20748]: audit 2026-03-10T13:45:12.344950+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T13:45:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:13 vm00 bash[20748]: audit 2026-03-10T13:45:12.345006+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T13:45:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:13 vm00 bash[20748]: audit 2026-03-10T13:45:12.345047+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T13:45:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:13 vm00 bash[20748]: audit
2026-03-10T13:45:12.345047+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:45:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:13 vm00 bash[20748]: audit 2026-03-10T13:45:12.359948+0000 mon.b (mon.2) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T13:45:13.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:13 vm00 bash[20748]: audit 2026-03-10T13:45:12.359948+0000 mon.b (mon.2) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T13:45:14.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:14 vm08 bash[23387]: cluster 2026-03-10T13:45:12.216449+0000 mgr.a (mgr.14150) 135 : cluster [DBG] pgmap v92: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:14.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:14 vm08 bash[23387]: cluster 2026-03-10T13:45:12.216449+0000 mgr.a (mgr.14150) 135 : cluster [DBG] pgmap v92: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:14.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:14 vm08 bash[23387]: cluster 2026-03-10T13:45:13.004589+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-10T13:45:14.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:14 vm08 bash[23387]: cluster 2026-03-10T13:45:13.004589+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-10T13:45:14.338 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:45:14.394 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:14 vm00 bash[20748]: cluster 2026-03-10T13:45:12.216449+0000 mgr.a (mgr.14150) 135 : cluster [DBG] pgmap v92: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:14.395 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:14 vm00 bash[20748]: cluster 2026-03-10T13:45:12.216449+0000 mgr.a (mgr.14150) 135 : cluster [DBG] pgmap v92: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:14.395 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:14 vm00 bash[20748]: cluster 2026-03-10T13:45:13.004589+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-10T13:45:14.395 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:14 vm00 bash[20748]: cluster 2026-03-10T13:45:13.004589+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-10T13:45:14.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:14 vm07 bash[23044]: cluster 2026-03-10T13:45:12.216449+0000 mgr.a (mgr.14150) 135 : cluster [DBG] pgmap v92: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:14.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:14 vm07 bash[23044]: cluster 2026-03-10T13:45:12.216449+0000 mgr.a (mgr.14150) 135 : cluster [DBG] pgmap v92: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:14.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:14 vm07 bash[23044]: cluster 2026-03-10T13:45:13.004589+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-10T13:45:14.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:14 vm07 bash[23044]: cluster 2026-03-10T13:45:13.004589+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-10T13:45:14.590 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:45:14.640 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":22,"num_osds":3,"num_up_osds":3,"osd_up_since":1773150308,"num_in_osds":3,"osd_in_since":1773150292,"num_remapped_pgs":0} 2026-03-10T13:45:14.640 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid 
c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph osd dump --format=json 2026-03-10T13:45:15.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:15 vm08 bash[23387]: audit 2026-03-10T13:45:14.591167+0000 mon.a (mon.0) 421 : audit [DBG] from='client.? 192.168.123.100:0/1448662109' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:45:15.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:15 vm08 bash[23387]: audit 2026-03-10T13:45:14.591167+0000 mon.a (mon.0) 421 : audit [DBG] from='client.? 192.168.123.100:0/1448662109' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:45:15.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:15 vm00 bash[20748]: audit 2026-03-10T13:45:14.591167+0000 mon.a (mon.0) 421 : audit [DBG] from='client.? 192.168.123.100:0/1448662109' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:45:15.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:15 vm00 bash[20748]: audit 2026-03-10T13:45:14.591167+0000 mon.a (mon.0) 421 : audit [DBG] from='client.? 192.168.123.100:0/1448662109' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:45:15.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:15 vm07 bash[23044]: audit 2026-03-10T13:45:14.591167+0000 mon.a (mon.0) 421 : audit [DBG] from='client.? 192.168.123.100:0/1448662109' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:45:15.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:15 vm07 bash[23044]: audit 2026-03-10T13:45:14.591167+0000 mon.a (mon.0) 421 : audit [DBG] from='client.? 
192.168.123.100:0/1448662109' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T13:45:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:16 vm08 bash[23387]: cluster 2026-03-10T13:45:14.216689+0000 mgr.a (mgr.14150) 136 : cluster [DBG] pgmap v94: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:16 vm08 bash[23387]: cluster 2026-03-10T13:45:14.216689+0000 mgr.a (mgr.14150) 136 : cluster [DBG] pgmap v94: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:16 vm08 bash[23387]: cluster 2026-03-10T13:45:15.017373+0000 mon.a (mon.0) 422 : cluster [DBG] mgrmap e14: a(active, since 2m), standbys: b 2026-03-10T13:45:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:16 vm08 bash[23387]: cluster 2026-03-10T13:45:15.017373+0000 mon.a (mon.0) 422 : cluster [DBG] mgrmap e14: a(active, since 2m), standbys: b 2026-03-10T13:45:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:16 vm08 bash[23387]: audit 2026-03-10T13:45:15.135567+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:16 vm08 bash[23387]: audit 2026-03-10T13:45:15.135567+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:16 vm08 bash[23387]: audit 2026-03-10T13:45:15.138850+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:16 vm08 bash[23387]: audit 2026-03-10T13:45:15.138850+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.337 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:16 vm08 bash[23387]: audit 2026-03-10T13:45:15.139418+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:45:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:16 vm08 bash[23387]: audit 2026-03-10T13:45:15.139418+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:45:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:16 vm08 bash[23387]: audit 2026-03-10T13:45:15.142130+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:16 vm08 bash[23387]: audit 2026-03-10T13:45:15.142130+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:16 vm08 bash[23387]: audit 2026-03-10T13:45:15.143089+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:16.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:16 vm08 bash[23387]: audit 2026-03-10T13:45:15.143089+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:16.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:16 vm08 bash[23387]: audit 2026-03-10T13:45:15.143456+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:45:16.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 
13:45:16 vm08 bash[23387]: audit 2026-03-10T13:45:15.143456+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:45:16.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:16 vm08 bash[23387]: audit 2026-03-10T13:45:15.146329+0000 mon.a (mon.0) 429 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:16 vm08 bash[23387]: audit 2026-03-10T13:45:15.146329+0000 mon.a (mon.0) 429 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:16 vm00 bash[20748]: cluster 2026-03-10T13:45:14.216689+0000 mgr.a (mgr.14150) 136 : cluster [DBG] pgmap v94: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:16 vm00 bash[20748]: cluster 2026-03-10T13:45:14.216689+0000 mgr.a (mgr.14150) 136 : cluster [DBG] pgmap v94: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:16 vm00 bash[20748]: cluster 2026-03-10T13:45:15.017373+0000 mon.a (mon.0) 422 : cluster [DBG] mgrmap e14: a(active, since 2m), standbys: b 2026-03-10T13:45:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:16 vm00 bash[20748]: cluster 2026-03-10T13:45:15.017373+0000 mon.a (mon.0) 422 : cluster [DBG] mgrmap e14: a(active, since 2m), standbys: b 2026-03-10T13:45:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:16 vm00 bash[20748]: audit 2026-03-10T13:45:15.135567+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:16 vm00 bash[20748]: audit 2026-03-10T13:45:15.135567+0000 mon.a (mon.0) 423 : audit 
[INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:16 vm00 bash[20748]: audit 2026-03-10T13:45:15.138850+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:16 vm00 bash[20748]: audit 2026-03-10T13:45:15.138850+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:16 vm00 bash[20748]: audit 2026-03-10T13:45:15.139418+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:45:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:16 vm00 bash[20748]: audit 2026-03-10T13:45:15.139418+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:45:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:16 vm00 bash[20748]: audit 2026-03-10T13:45:15.142130+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:16 vm00 bash[20748]: audit 2026-03-10T13:45:15.142130+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:16 vm00 bash[20748]: audit 2026-03-10T13:45:15.143089+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:16 vm00 bash[20748]: audit 
2026-03-10T13:45:15.143089+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:16 vm00 bash[20748]: audit 2026-03-10T13:45:15.143456+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:45:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:16 vm00 bash[20748]: audit 2026-03-10T13:45:15.143456+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:45:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:16 vm00 bash[20748]: audit 2026-03-10T13:45:15.146329+0000 mon.a (mon.0) 429 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:16 vm00 bash[20748]: audit 2026-03-10T13:45:15.146329+0000 mon.a (mon.0) 429 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:16 vm07 bash[23044]: cluster 2026-03-10T13:45:14.216689+0000 mgr.a (mgr.14150) 136 : cluster [DBG] pgmap v94: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:16.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:16 vm07 bash[23044]: cluster 2026-03-10T13:45:14.216689+0000 mgr.a (mgr.14150) 136 : cluster [DBG] pgmap v94: 1 pgs: 1 unknown; 0 B data, 79 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:16.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:16 vm07 bash[23044]: cluster 2026-03-10T13:45:15.017373+0000 mon.a (mon.0) 422 : cluster [DBG] mgrmap e14: a(active, since 2m), standbys: b 2026-03-10T13:45:16.499 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:16 vm07 bash[23044]: cluster 2026-03-10T13:45:15.017373+0000 mon.a (mon.0) 422 : cluster [DBG] mgrmap e14: a(active, since 2m), standbys: b 2026-03-10T13:45:16.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:16 vm07 bash[23044]: audit 2026-03-10T13:45:15.135567+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:16 vm07 bash[23044]: audit 2026-03-10T13:45:15.135567+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:16 vm07 bash[23044]: audit 2026-03-10T13:45:15.138850+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:16 vm07 bash[23044]: audit 2026-03-10T13:45:15.138850+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:16 vm07 bash[23044]: audit 2026-03-10T13:45:15.139418+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:45:16.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:16 vm07 bash[23044]: audit 2026-03-10T13:45:15.139418+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:45:16.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:16 vm07 bash[23044]: audit 2026-03-10T13:45:15.142130+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.499 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:16 vm07 bash[23044]: audit 2026-03-10T13:45:15.142130+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:16 vm07 bash[23044]: audit 2026-03-10T13:45:15.143089+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:16.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:16 vm07 bash[23044]: audit 2026-03-10T13:45:15.143089+0000 mon.a (mon.0) 427 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:45:16.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:16 vm07 bash[23044]: audit 2026-03-10T13:45:15.143456+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:45:16.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:16 vm07 bash[23044]: audit 2026-03-10T13:45:15.143456+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:45:16.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:16 vm07 bash[23044]: audit 2026-03-10T13:45:15.146329+0000 mon.a (mon.0) 429 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:16.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:16 vm07 bash[23044]: audit 2026-03-10T13:45:15.146329+0000 mon.a (mon.0) 429 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:45:17.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:17 vm08 bash[23387]: cephadm 2026-03-10T13:45:15.130480+0000 mgr.a (mgr.14150) 137 : cephadm [INF] 
Detected new or changed devices on vm08 2026-03-10T13:45:17.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:17 vm08 bash[23387]: cephadm 2026-03-10T13:45:15.130480+0000 mgr.a (mgr.14150) 137 : cephadm [INF] Detected new or changed devices on vm08 2026-03-10T13:45:17.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:17 vm08 bash[23387]: cephadm 2026-03-10T13:45:15.139807+0000 mgr.a (mgr.14150) 138 : cephadm [INF] Adjusting osd_memory_target on vm08 to 4551M 2026-03-10T13:45:17.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:17 vm08 bash[23387]: cephadm 2026-03-10T13:45:15.139807+0000 mgr.a (mgr.14150) 138 : cephadm [INF] Adjusting osd_memory_target on vm08 to 4551M 2026-03-10T13:45:17.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:17 vm00 bash[20748]: cephadm 2026-03-10T13:45:15.130480+0000 mgr.a (mgr.14150) 137 : cephadm [INF] Detected new or changed devices on vm08 2026-03-10T13:45:17.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:17 vm00 bash[20748]: cephadm 2026-03-10T13:45:15.130480+0000 mgr.a (mgr.14150) 137 : cephadm [INF] Detected new or changed devices on vm08 2026-03-10T13:45:17.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:17 vm00 bash[20748]: cephadm 2026-03-10T13:45:15.139807+0000 mgr.a (mgr.14150) 138 : cephadm [INF] Adjusting osd_memory_target on vm08 to 4551M 2026-03-10T13:45:17.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:17 vm00 bash[20748]: cephadm 2026-03-10T13:45:15.139807+0000 mgr.a (mgr.14150) 138 : cephadm [INF] Adjusting osd_memory_target on vm08 to 4551M 2026-03-10T13:45:17.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:17 vm07 bash[23044]: cephadm 2026-03-10T13:45:15.130480+0000 mgr.a (mgr.14150) 137 : cephadm [INF] Detected new or changed devices on vm08 2026-03-10T13:45:17.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:17 vm07 bash[23044]: cephadm 2026-03-10T13:45:15.130480+0000 mgr.a (mgr.14150) 137 : cephadm [INF] Detected new or changed devices on vm08 
2026-03-10T13:45:17.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:17 vm07 bash[23044]: cephadm 2026-03-10T13:45:15.139807+0000 mgr.a (mgr.14150) 138 : cephadm [INF] Adjusting osd_memory_target on vm08 to 4551M 2026-03-10T13:45:17.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:17 vm07 bash[23044]: cephadm 2026-03-10T13:45:15.139807+0000 mgr.a (mgr.14150) 138 : cephadm [INF] Adjusting osd_memory_target on vm08 to 4551M 2026-03-10T13:45:18.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:18 vm08 bash[23387]: cluster 2026-03-10T13:45:16.216887+0000 mgr.a (mgr.14150) 139 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:18.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:18 vm08 bash[23387]: cluster 2026-03-10T13:45:16.216887+0000 mgr.a (mgr.14150) 139 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:18.348 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:45:18.361 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:18 vm00 bash[20748]: cluster 2026-03-10T13:45:16.216887+0000 mgr.a (mgr.14150) 139 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:18.361 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:18 vm00 bash[20748]: cluster 2026-03-10T13:45:16.216887+0000 mgr.a (mgr.14150) 139 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:18.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:18 vm07 bash[23044]: cluster 2026-03-10T13:45:16.216887+0000 mgr.a (mgr.14150) 139 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:18.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:18 vm07 bash[23044]: cluster 
2026-03-10T13:45:16.216887+0000 mgr.a (mgr.14150) 139 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:18.600 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:45:18.600 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":22,"fsid":"c9620084-1c86-11f1-bcc5-e3fb709eab0a","created":"2026-03-10T13:42:08.642186+0000","modified":"2026-03-10T13:45:12.992231+0000","last_up_change":"2026-03-10T13:45:08.757720+0000","last_in_change":"2026-03-10T13:44:52.873153+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T13:45:10.246821+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"targe
t_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"d6acd3f9-435e-414f-ba14-3aa55444aaaf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":430820835},{"type":"v1","addr":"192.168.123.100:6803","nonce":430820835}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":430820835},{"type":"v1","addr":"192.168.123.100:6805","nonce":430820835}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":430820835},{"type":"v1","addr":"192.168.123.100:6809","nonce":430820835}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":430820835},{"type":"v1","addr":"192.168.123.100:6807","nonce":430820835}]},"public_addr":"192.168.123.100:6803/430820835","cluster_addr":"192.168.123.100:6805/430820835","heartbeat_back_addr":"192.168.123.100:6809/430820835","heartbeat_front_addr":"192.168.123.100:6807/430820835","state":["exists","up"]},{"osd":1,"uuid":"62e51a83-b44b-465f-8f6e-e14cd4837af5","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_beg
in":0,"last_clean_end":0,"up_from":13,"up_thru":20,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6800","nonce":2145894062},{"type":"v1","addr":"192.168.123.107:6801","nonce":2145894062}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":2145894062},{"type":"v1","addr":"192.168.123.107:6803","nonce":2145894062}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":2145894062},{"type":"v1","addr":"192.168.123.107:6807","nonce":2145894062}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":2145894062},{"type":"v1","addr":"192.168.123.107:6805","nonce":2145894062}]},"public_addr":"192.168.123.107:6801/2145894062","cluster_addr":"192.168.123.107:6803/2145894062","heartbeat_back_addr":"192.168.123.107:6807/2145894062","heartbeat_front_addr":"192.168.123.107:6805/2145894062","state":["exists","up"]},{"osd":2,"uuid":"84b6e04b-cad7-4941-bbb1-4ca53f9ed622","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":941417901},{"type":"v1","addr":"192.168.123.108:6801","nonce":941417901}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":941417901},{"type":"v1","addr":"192.168.123.108:6803","nonce":941417901}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":941417901},{"type":"v1","addr":"192.168.123.108:6807","nonce":941417901}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":941417901},{"type":"v1","addr":"192.168.123.108:6805","nonce":941417901}]},"public_addr":"192.168.123.108:6801/941417901","cluster_addr":"192.168.123.108:6803/941417901","heartbeat_back_addr":"192.168.123.108:6807/941417901","heartbeat_front_addr":"192.168.123.108:6805/941417901","state":["exists","up"]}],
"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:44:02.624281+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:44:36.117101+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:45:06.989224+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:0/594940130":"2026-03-11T13:42:30.190724+0000","192.168.123.100:0/3876959113":"2026-03-11T13:42:30.190724+0000","192.168.123.100:6801/608270361":"2026-03-11T13:42:30.190724+0000","192.168.123.100:6800/608270361":"2026-03-11T13:42:30.190724+0000","192.168.123.100:0/2625574641":"2026-03-11T13:42:20.129969+0000","192.168.123.100:0/2879841732":"2026-03-11T13:42:20.129969+0000","192.168.123.100:6801/3205452410":"2026-03-11T13:42:20.129969+0000","192.168.123.100:0/1312129672":"2026-03-11T13:42:30.190724+0000","192.168.123.100:6800/3205452410":"2026-03-11T13:42:20.129969+0000","192.168.123.100:0/1053246253":"2026-03-11T13:42:20.129969+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T13:45:18.652 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-10T13:45:10.246821+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 
'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '22', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 3, 'score_stable': 3, 'optimal_score': 1, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-10T13:45:18.652 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 
shell --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph osd pool get .mgr pg_num 2026-03-10T13:45:19.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:19 vm08 bash[23387]: audit 2026-03-10T13:45:18.599425+0000 mon.b (mon.2) 10 : audit [DBG] from='client.? 192.168.123.100:0/3327234192' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:45:19.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:19 vm08 bash[23387]: audit 2026-03-10T13:45:18.599425+0000 mon.b (mon.2) 10 : audit [DBG] from='client.? 192.168.123.100:0/3327234192' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:45:19.359 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:19 vm00 bash[20748]: audit 2026-03-10T13:45:18.599425+0000 mon.b (mon.2) 10 : audit [DBG] from='client.? 192.168.123.100:0/3327234192' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:45:19.359 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:19 vm00 bash[20748]: audit 2026-03-10T13:45:18.599425+0000 mon.b (mon.2) 10 : audit [DBG] from='client.? 192.168.123.100:0/3327234192' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:45:19.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:19 vm07 bash[23044]: audit 2026-03-10T13:45:18.599425+0000 mon.b (mon.2) 10 : audit [DBG] from='client.? 192.168.123.100:0/3327234192' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:45:19.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:19 vm07 bash[23044]: audit 2026-03-10T13:45:18.599425+0000 mon.b (mon.2) 10 : audit [DBG] from='client.? 
192.168.123.100:0/3327234192' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:45:20.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:20 vm08 bash[23387]: cluster 2026-03-10T13:45:18.217184+0000 mgr.a (mgr.14150) 140 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:20.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:20 vm08 bash[23387]: cluster 2026-03-10T13:45:18.217184+0000 mgr.a (mgr.14150) 140 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:20.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:20 vm00 bash[20748]: cluster 2026-03-10T13:45:18.217184+0000 mgr.a (mgr.14150) 140 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:20.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:20 vm00 bash[20748]: cluster 2026-03-10T13:45:18.217184+0000 mgr.a (mgr.14150) 140 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:20.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:20 vm07 bash[23044]: cluster 2026-03-10T13:45:18.217184+0000 mgr.a (mgr.14150) 140 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:20.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:20 vm07 bash[23044]: cluster 2026-03-10T13:45:18.217184+0000 mgr.a (mgr.14150) 140 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:21.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:21 vm00 bash[20748]: cluster 2026-03-10T13:45:20.217434+0000 mgr.a (mgr.14150) 141 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:21.466 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:21 vm00 
bash[20748]: cluster 2026-03-10T13:45:20.217434+0000 mgr.a (mgr.14150) 141 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:21.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:21 vm07 bash[23044]: cluster 2026-03-10T13:45:20.217434+0000 mgr.a (mgr.14150) 141 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:21.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:21 vm07 bash[23044]: cluster 2026-03-10T13:45:20.217434+0000 mgr.a (mgr.14150) 141 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:21.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:21 vm08 bash[23387]: cluster 2026-03-10T13:45:20.217434+0000 mgr.a (mgr.14150) 141 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:21.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:21 vm08 bash[23387]: cluster 2026-03-10T13:45:20.217434+0000 mgr.a (mgr.14150) 141 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:22.359 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:45:22.586 INFO:teuthology.orchestra.run.vm00.stdout:pg_num: 1 2026-03-10T13:45:22.632 INFO:tasks.cephadm:Setting up client nodes... 2026-03-10T13:45:22.632 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 
2026-03-10T13:45:22.632 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-10T13:45:22.632 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph mgr dump --format=json 2026-03-10T13:45:23.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:23 vm08 bash[23387]: cluster 2026-03-10T13:45:22.217703+0000 mgr.a (mgr.14150) 142 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:23.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:23 vm08 bash[23387]: cluster 2026-03-10T13:45:22.217703+0000 mgr.a (mgr.14150) 142 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:23.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:23 vm08 bash[23387]: audit 2026-03-10T13:45:22.585503+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 192.168.123.100:0/1466539720' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T13:45:23.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:23 vm08 bash[23387]: audit 2026-03-10T13:45:22.585503+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 
192.168.123.100:0/1466539720' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T13:45:23.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:23 vm00 bash[20748]: cluster 2026-03-10T13:45:22.217703+0000 mgr.a (mgr.14150) 142 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:23.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:23 vm00 bash[20748]: cluster 2026-03-10T13:45:22.217703+0000 mgr.a (mgr.14150) 142 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:23.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:23 vm00 bash[20748]: audit 2026-03-10T13:45:22.585503+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 192.168.123.100:0/1466539720' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T13:45:23.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:23 vm00 bash[20748]: audit 2026-03-10T13:45:22.585503+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 192.168.123.100:0/1466539720' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T13:45:23.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:23 vm07 bash[23044]: cluster 2026-03-10T13:45:22.217703+0000 mgr.a (mgr.14150) 142 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:23.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:23 vm07 bash[23044]: cluster 2026-03-10T13:45:22.217703+0000 mgr.a (mgr.14150) 142 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:23.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:23 vm07 bash[23044]: audit 2026-03-10T13:45:22.585503+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 
192.168.123.100:0/1466539720' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T13:45:23.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:23 vm07 bash[23044]: audit 2026-03-10T13:45:22.585503+0000 mon.b (mon.2) 11 : audit [DBG] from='client.? 192.168.123.100:0/1466539720' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T13:45:25.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:25 vm08 bash[23387]: cluster 2026-03-10T13:45:24.217920+0000 mgr.a (mgr.14150) 143 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:25.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:25 vm08 bash[23387]: cluster 2026-03-10T13:45:24.217920+0000 mgr.a (mgr.14150) 143 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:25.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:25 vm00 bash[20748]: cluster 2026-03-10T13:45:24.217920+0000 mgr.a (mgr.14150) 143 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:25.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:25 vm00 bash[20748]: cluster 2026-03-10T13:45:24.217920+0000 mgr.a (mgr.14150) 143 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:25.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:25 vm07 bash[23044]: cluster 2026-03-10T13:45:24.217920+0000 mgr.a (mgr.14150) 143 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:25.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:25 vm07 bash[23044]: cluster 2026-03-10T13:45:24.217920+0000 mgr.a (mgr.14150) 143 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T13:45:26.369 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:45:26.635 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:45:26.684 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":14,"flags":0,"active_gid":14150,"active_name":"a","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":3964800760},{"type":"v1","addr":"192.168.123.100:6801","nonce":3964800760}]},"active_addr":"192.168.123.100:6801/3964800760","active_change":"2026-03-10T13:42:30.190834+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":24110,"name":"b","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","
min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across 
cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to 
days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current `PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.100:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":2951893657}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.
168.123.100:0","nonce":730362735}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":3260522057}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":3981086648}]}]} 2026-03-10T13:45:26.686 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T13:45:26.686 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T13:45:26.686 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph osd dump --format=json 2026-03-10T13:45:27.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:27 vm08 bash[23387]: cluster 2026-03-10T13:45:26.218154+0000 mgr.a (mgr.14150) 144 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:27.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:27 vm08 bash[23387]: cluster 2026-03-10T13:45:26.218154+0000 mgr.a (mgr.14150) 144 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:27.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:27 vm08 bash[23387]: audit 2026-03-10T13:45:26.634799+0000 mon.a (mon.0) 430 : audit [DBG] from='client.? 192.168.123.100:0/3194634428' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T13:45:27.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:27 vm08 bash[23387]: audit 2026-03-10T13:45:26.634799+0000 mon.a (mon.0) 430 : audit [DBG] from='client.? 
192.168.123.100:0/3194634428' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T13:45:27.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:27 vm00 bash[20748]: cluster 2026-03-10T13:45:26.218154+0000 mgr.a (mgr.14150) 144 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:27.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:27 vm00 bash[20748]: cluster 2026-03-10T13:45:26.218154+0000 mgr.a (mgr.14150) 144 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:27.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:27 vm00 bash[20748]: audit 2026-03-10T13:45:26.634799+0000 mon.a (mon.0) 430 : audit [DBG] from='client.? 192.168.123.100:0/3194634428' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T13:45:27.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:27 vm00 bash[20748]: audit 2026-03-10T13:45:26.634799+0000 mon.a (mon.0) 430 : audit [DBG] from='client.? 192.168.123.100:0/3194634428' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T13:45:27.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:27 vm07 bash[23044]: cluster 2026-03-10T13:45:26.218154+0000 mgr.a (mgr.14150) 144 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:27.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:27 vm07 bash[23044]: cluster 2026-03-10T13:45:26.218154+0000 mgr.a (mgr.14150) 144 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:27.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:27 vm07 bash[23044]: audit 2026-03-10T13:45:26.634799+0000 mon.a (mon.0) 430 : audit [DBG] from='client.? 
192.168.123.100:0/3194634428' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T13:45:27.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:27 vm07 bash[23044]: audit 2026-03-10T13:45:26.634799+0000 mon.a (mon.0) 430 : audit [DBG] from='client.? 192.168.123.100:0/3194634428' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T13:45:29.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:29 vm08 bash[23387]: cluster 2026-03-10T13:45:28.218397+0000 mgr.a (mgr.14150) 145 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:29.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:29 vm08 bash[23387]: cluster 2026-03-10T13:45:28.218397+0000 mgr.a (mgr.14150) 145 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:29.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:29 vm00 bash[20748]: cluster 2026-03-10T13:45:28.218397+0000 mgr.a (mgr.14150) 145 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:29.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:29 vm00 bash[20748]: cluster 2026-03-10T13:45:28.218397+0000 mgr.a (mgr.14150) 145 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:29.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:29 vm07 bash[23044]: cluster 2026-03-10T13:45:28.218397+0000 mgr.a (mgr.14150) 145 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:29.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:29 vm07 bash[23044]: cluster 2026-03-10T13:45:28.218397+0000 mgr.a (mgr.14150) 145 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:30.381 
INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:45:30.618 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:45:30.618 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":22,"fsid":"c9620084-1c86-11f1-bcc5-e3fb709eab0a","created":"2026-03-10T13:42:08.642186+0000","modified":"2026-03-10T13:45:12.992231+0000","last_up_change":"2026-03-10T13:45:08.757720+0000","last_in_change":"2026-03-10T13:44:52.873153+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T13:45:10.246821+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target
_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"d6acd3f9-435e-414f-ba14-3aa55444aaaf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":430820835},{"type":"v1","addr":"192.168.123.100:6803","nonce":430820835}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":430820835},{"type":"v1","addr":"192.168.123.100:6805","nonce":430820835}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":430820835},{"type":"v1","addr":"192.168.123.100:6809","nonce":430820835}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":430820835},{"type":"v1","addr":"192.168.123.100:6807","nonce":430820835}]},"public_addr":"192.168.123.100:6803/430820835","cluster_addr":"192.168.123.100:6805/430820835","heartbeat_back_addr":"192.168.123.100:6809/430820835","heartbeat_front_addr":"192.168.123.100:6807/430820835","state":["exists","up"]},{"osd":1,"uuid":"62e51a83-b44b-465f-8f6e-e14cd4837af5","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_f
rom":13,"up_thru":20,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6800","nonce":2145894062},{"type":"v1","addr":"192.168.123.107:6801","nonce":2145894062}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":2145894062},{"type":"v1","addr":"192.168.123.107:6803","nonce":2145894062}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":2145894062},{"type":"v1","addr":"192.168.123.107:6807","nonce":2145894062}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":2145894062},{"type":"v1","addr":"192.168.123.107:6805","nonce":2145894062}]},"public_addr":"192.168.123.107:6801/2145894062","cluster_addr":"192.168.123.107:6803/2145894062","heartbeat_back_addr":"192.168.123.107:6807/2145894062","heartbeat_front_addr":"192.168.123.107:6805/2145894062","state":["exists","up"]},{"osd":2,"uuid":"84b6e04b-cad7-4941-bbb1-4ca53f9ed622","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":941417901},{"type":"v1","addr":"192.168.123.108:6801","nonce":941417901}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":941417901},{"type":"v1","addr":"192.168.123.108:6803","nonce":941417901}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":941417901},{"type":"v1","addr":"192.168.123.108:6807","nonce":941417901}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":941417901},{"type":"v1","addr":"192.168.123.108:6805","nonce":941417901}]},"public_addr":"192.168.123.108:6801/941417901","cluster_addr":"192.168.123.108:6803/941417901","heartbeat_back_addr":"192.168.123.108:6807/941417901","heartbeat_front_addr":"192.168.123.108:6805/941417901","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_st
amp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:44:02.624281+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:44:36.117101+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:45:06.989224+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:0/594940130":"2026-03-11T13:42:30.190724+0000","192.168.123.100:0/3876959113":"2026-03-11T13:42:30.190724+0000","192.168.123.100:6801/608270361":"2026-03-11T13:42:30.190724+0000","192.168.123.100:6800/608270361":"2026-03-11T13:42:30.190724+0000","192.168.123.100:0/2625574641":"2026-03-11T13:42:20.129969+0000","192.168.123.100:0/2879841732":"2026-03-11T13:42:20.129969+0000","192.168.123.100:6801/3205452410":"2026-03-11T13:42:20.129969+0000","192.168.123.100:0/1312129672":"2026-03-11T13:42:30.190724+0000","192.168.123.100:6800/3205452410":"2026-03-11T13:42:20.129969+0000","192.168.123.100:0/1053246253":"2026-03-11T13:42:20.129969+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T13:45:30.663 INFO:tasks.cephadm.ceph_manager.ceph:all up! 
2026-03-10T13:45:30.663 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph osd dump --format=json 2026-03-10T13:45:31.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:31 vm08 bash[23387]: cluster 2026-03-10T13:45:30.218590+0000 mgr.a (mgr.14150) 146 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:31.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:31 vm08 bash[23387]: cluster 2026-03-10T13:45:30.218590+0000 mgr.a (mgr.14150) 146 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:31.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:31 vm08 bash[23387]: audit 2026-03-10T13:45:30.619208+0000 mon.a (mon.0) 431 : audit [DBG] from='client.? 192.168.123.100:0/3896829115' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:45:31.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:31 vm08 bash[23387]: audit 2026-03-10T13:45:30.619208+0000 mon.a (mon.0) 431 : audit [DBG] from='client.? 
192.168.123.100:0/3896829115' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:45:31.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:31 vm00 bash[20748]: cluster 2026-03-10T13:45:30.218590+0000 mgr.a (mgr.14150) 146 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:31.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:31 vm00 bash[20748]: cluster 2026-03-10T13:45:30.218590+0000 mgr.a (mgr.14150) 146 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:31.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:31 vm00 bash[20748]: audit 2026-03-10T13:45:30.619208+0000 mon.a (mon.0) 431 : audit [DBG] from='client.? 192.168.123.100:0/3896829115' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:45:31.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:31 vm00 bash[20748]: audit 2026-03-10T13:45:30.619208+0000 mon.a (mon.0) 431 : audit [DBG] from='client.? 192.168.123.100:0/3896829115' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:45:31.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:31 vm07 bash[23044]: cluster 2026-03-10T13:45:30.218590+0000 mgr.a (mgr.14150) 146 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:31.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:31 vm07 bash[23044]: cluster 2026-03-10T13:45:30.218590+0000 mgr.a (mgr.14150) 146 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:31.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:31 vm07 bash[23044]: audit 2026-03-10T13:45:30.619208+0000 mon.a (mon.0) 431 : audit [DBG] from='client.? 
192.168.123.100:0/3896829115' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:45:31.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:31 vm07 bash[23044]: audit 2026-03-10T13:45:30.619208+0000 mon.a (mon.0) 431 : audit [DBG] from='client.? 192.168.123.100:0/3896829115' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:45:33.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:33 vm08 bash[23387]: cluster 2026-03-10T13:45:32.218820+0000 mgr.a (mgr.14150) 147 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:33.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:33 vm08 bash[23387]: cluster 2026-03-10T13:45:32.218820+0000 mgr.a (mgr.14150) 147 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:33.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:33 vm00 bash[20748]: cluster 2026-03-10T13:45:32.218820+0000 mgr.a (mgr.14150) 147 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:33.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:33 vm00 bash[20748]: cluster 2026-03-10T13:45:32.218820+0000 mgr.a (mgr.14150) 147 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:33.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:33 vm07 bash[23044]: cluster 2026-03-10T13:45:32.218820+0000 mgr.a (mgr.14150) 147 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:33.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:33 vm07 bash[23044]: cluster 2026-03-10T13:45:32.218820+0000 mgr.a (mgr.14150) 147 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:34.391 
INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:45:34.619 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:45:34.619 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":22,"fsid":"c9620084-1c86-11f1-bcc5-e3fb709eab0a","created":"2026-03-10T13:42:08.642186+0000","modified":"2026-03-10T13:45:12.992231+0000","last_up_change":"2026-03-10T13:45:08.757720+0000","last_in_change":"2026-03-10T13:44:52.873153+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T13:45:10.246821+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target
_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"d6acd3f9-435e-414f-ba14-3aa55444aaaf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":430820835},{"type":"v1","addr":"192.168.123.100:6803","nonce":430820835}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":430820835},{"type":"v1","addr":"192.168.123.100:6805","nonce":430820835}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":430820835},{"type":"v1","addr":"192.168.123.100:6809","nonce":430820835}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":430820835},{"type":"v1","addr":"192.168.123.100:6807","nonce":430820835}]},"public_addr":"192.168.123.100:6803/430820835","cluster_addr":"192.168.123.100:6805/430820835","heartbeat_back_addr":"192.168.123.100:6809/430820835","heartbeat_front_addr":"192.168.123.100:6807/430820835","state":["exists","up"]},{"osd":1,"uuid":"62e51a83-b44b-465f-8f6e-e14cd4837af5","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_f
rom":13,"up_thru":20,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6800","nonce":2145894062},{"type":"v1","addr":"192.168.123.107:6801","nonce":2145894062}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":2145894062},{"type":"v1","addr":"192.168.123.107:6803","nonce":2145894062}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":2145894062},{"type":"v1","addr":"192.168.123.107:6807","nonce":2145894062}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":2145894062},{"type":"v1","addr":"192.168.123.107:6805","nonce":2145894062}]},"public_addr":"192.168.123.107:6801/2145894062","cluster_addr":"192.168.123.107:6803/2145894062","heartbeat_back_addr":"192.168.123.107:6807/2145894062","heartbeat_front_addr":"192.168.123.107:6805/2145894062","state":["exists","up"]},{"osd":2,"uuid":"84b6e04b-cad7-4941-bbb1-4ca53f9ed622","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":941417901},{"type":"v1","addr":"192.168.123.108:6801","nonce":941417901}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":941417901},{"type":"v1","addr":"192.168.123.108:6803","nonce":941417901}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":941417901},{"type":"v1","addr":"192.168.123.108:6807","nonce":941417901}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":941417901},{"type":"v1","addr":"192.168.123.108:6805","nonce":941417901}]},"public_addr":"192.168.123.108:6801/941417901","cluster_addr":"192.168.123.108:6803/941417901","heartbeat_back_addr":"192.168.123.108:6807/941417901","heartbeat_front_addr":"192.168.123.108:6805/941417901","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_st
amp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:44:02.624281+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:44:36.117101+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T13:45:06.989224+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.100:0/594940130":"2026-03-11T13:42:30.190724+0000","192.168.123.100:0/3876959113":"2026-03-11T13:42:30.190724+0000","192.168.123.100:6801/608270361":"2026-03-11T13:42:30.190724+0000","192.168.123.100:6800/608270361":"2026-03-11T13:42:30.190724+0000","192.168.123.100:0/2625574641":"2026-03-11T13:42:20.129969+0000","192.168.123.100:0/2879841732":"2026-03-11T13:42:20.129969+0000","192.168.123.100:6801/3205452410":"2026-03-11T13:42:20.129969+0000","192.168.123.100:0/1312129672":"2026-03-11T13:42:30.190724+0000","192.168.123.100:6800/3205452410":"2026-03-11T13:42:20.129969+0000","192.168.123.100:0/1053246253":"2026-03-11T13:42:20.129969+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T13:45:34.663 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph tell osd.0 
flush_pg_stats 2026-03-10T13:45:34.664 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph tell osd.1 flush_pg_stats 2026-03-10T13:45:34.664 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph tell osd.2 flush_pg_stats 2026-03-10T13:45:35.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:35 vm08 bash[23387]: cluster 2026-03-10T13:45:34.219074+0000 mgr.a (mgr.14150) 148 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:35.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:35 vm08 bash[23387]: cluster 2026-03-10T13:45:34.219074+0000 mgr.a (mgr.14150) 148 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:35.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:35 vm08 bash[23387]: audit 2026-03-10T13:45:34.620066+0000 mon.a (mon.0) 432 : audit [DBG] from='client.? 192.168.123.100:0/2909949706' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:45:35.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:35 vm08 bash[23387]: audit 2026-03-10T13:45:34.620066+0000 mon.a (mon.0) 432 : audit [DBG] from='client.? 
192.168.123.100:0/2909949706' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:45:35.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:35 vm00 bash[20748]: cluster 2026-03-10T13:45:34.219074+0000 mgr.a (mgr.14150) 148 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:35.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:35 vm00 bash[20748]: cluster 2026-03-10T13:45:34.219074+0000 mgr.a (mgr.14150) 148 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:35.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:35 vm00 bash[20748]: audit 2026-03-10T13:45:34.620066+0000 mon.a (mon.0) 432 : audit [DBG] from='client.? 192.168.123.100:0/2909949706' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:45:35.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:35 vm00 bash[20748]: audit 2026-03-10T13:45:34.620066+0000 mon.a (mon.0) 432 : audit [DBG] from='client.? 192.168.123.100:0/2909949706' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:45:35.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:35 vm07 bash[23044]: cluster 2026-03-10T13:45:34.219074+0000 mgr.a (mgr.14150) 148 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:35.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:35 vm07 bash[23044]: cluster 2026-03-10T13:45:34.219074+0000 mgr.a (mgr.14150) 148 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:35.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:35 vm07 bash[23044]: audit 2026-03-10T13:45:34.620066+0000 mon.a (mon.0) 432 : audit [DBG] from='client.? 
192.168.123.100:0/2909949706' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:45:35.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:35 vm07 bash[23044]: audit 2026-03-10T13:45:34.620066+0000 mon.a (mon.0) 432 : audit [DBG] from='client.? 192.168.123.100:0/2909949706' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T13:45:37.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:37 vm08 bash[23387]: cluster 2026-03-10T13:45:36.219296+0000 mgr.a (mgr.14150) 149 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:37.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:37 vm08 bash[23387]: cluster 2026-03-10T13:45:36.219296+0000 mgr.a (mgr.14150) 149 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:37.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:37 vm00 bash[20748]: cluster 2026-03-10T13:45:36.219296+0000 mgr.a (mgr.14150) 149 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:37.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:37 vm00 bash[20748]: cluster 2026-03-10T13:45:36.219296+0000 mgr.a (mgr.14150) 149 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:37.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:37 vm07 bash[23044]: cluster 2026-03-10T13:45:36.219296+0000 mgr.a (mgr.14150) 149 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:37.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:37 vm07 bash[23044]: cluster 2026-03-10T13:45:36.219296+0000 mgr.a (mgr.14150) 149 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:38.403 
INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:45:38.403 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:45:38.404 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:45:38.674 INFO:teuthology.orchestra.run.vm00.stdout:77309411336 2026-03-10T13:45:38.674 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph osd last-stat-seq osd.2 2026-03-10T13:45:38.723 INFO:teuthology.orchestra.run.vm00.stdout:55834574861 2026-03-10T13:45:38.723 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph osd last-stat-seq osd.1 2026-03-10T13:45:38.753 INFO:teuthology.orchestra.run.vm00.stdout:34359738388 2026-03-10T13:45:38.753 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph osd last-stat-seq osd.0 2026-03-10T13:45:39.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:39 vm08 bash[23387]: cluster 2026-03-10T13:45:38.219583+0000 mgr.a (mgr.14150) 150 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:39.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:39 vm08 bash[23387]: cluster 2026-03-10T13:45:38.219583+0000 mgr.a (mgr.14150) 150 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:39.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 
10 13:45:39 vm00 bash[20748]: cluster 2026-03-10T13:45:38.219583+0000 mgr.a (mgr.14150) 150 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:39.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:39 vm00 bash[20748]: cluster 2026-03-10T13:45:38.219583+0000 mgr.a (mgr.14150) 150 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:39.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:39 vm07 bash[23044]: cluster 2026-03-10T13:45:38.219583+0000 mgr.a (mgr.14150) 150 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:39.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:39 vm07 bash[23044]: cluster 2026-03-10T13:45:38.219583+0000 mgr.a (mgr.14150) 150 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:41.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:41 vm08 bash[23387]: cluster 2026-03-10T13:45:40.219824+0000 mgr.a (mgr.14150) 151 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:41.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:41 vm08 bash[23387]: cluster 2026-03-10T13:45:40.219824+0000 mgr.a (mgr.14150) 151 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:41.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:41 vm00 bash[20748]: cluster 2026-03-10T13:45:40.219824+0000 mgr.a (mgr.14150) 151 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:41.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:41 vm00 bash[20748]: cluster 2026-03-10T13:45:40.219824+0000 mgr.a (mgr.14150) 151 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB 
avail 2026-03-10T13:45:41.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:41 vm07 bash[23044]: cluster 2026-03-10T13:45:40.219824+0000 mgr.a (mgr.14150) 151 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:41.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:41 vm07 bash[23044]: cluster 2026-03-10T13:45:40.219824+0000 mgr.a (mgr.14150) 151 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:42.413 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:45:42.414 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:45:42.674 INFO:teuthology.orchestra.run.vm00.stdout:77309411337 2026-03-10T13:45:42.676 INFO:teuthology.orchestra.run.vm00.stdout:55834574862 2026-03-10T13:45:42.731 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411336 got 77309411337 for osd.2 2026-03-10T13:45:42.731 DEBUG:teuthology.parallel:result is None 2026-03-10T13:45:42.739 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574861 got 55834574862 for osd.1 2026-03-10T13:45:42.739 DEBUG:teuthology.parallel:result is None 2026-03-10T13:45:43.415 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:45:43.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:43 vm08 bash[23387]: cluster 2026-03-10T13:45:42.220108+0000 mgr.a (mgr.14150) 152 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:43.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:43 vm08 bash[23387]: cluster 2026-03-10T13:45:42.220108+0000 mgr.a (mgr.14150) 152 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T13:45:43.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:43 vm08 bash[23387]: audit 2026-03-10T13:45:42.675109+0000 mon.a (mon.0) 433 : audit [DBG] from='client.? 192.168.123.100:0/697188099' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T13:45:43.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:43 vm08 bash[23387]: audit 2026-03-10T13:45:42.675109+0000 mon.a (mon.0) 433 : audit [DBG] from='client.? 192.168.123.100:0/697188099' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T13:45:43.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:43 vm08 bash[23387]: audit 2026-03-10T13:45:42.677786+0000 mon.a (mon.0) 434 : audit [DBG] from='client.? 192.168.123.100:0/177308099' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T13:45:43.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:43 vm08 bash[23387]: audit 2026-03-10T13:45:42.677786+0000 mon.a (mon.0) 434 : audit [DBG] from='client.? 192.168.123.100:0/177308099' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T13:45:43.666 INFO:teuthology.orchestra.run.vm00.stdout:34359738389 2026-03-10T13:45:43.676 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:43 vm00 bash[20748]: cluster 2026-03-10T13:45:42.220108+0000 mgr.a (mgr.14150) 152 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:43.677 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:43 vm00 bash[20748]: cluster 2026-03-10T13:45:42.220108+0000 mgr.a (mgr.14150) 152 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:43.677 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:43 vm00 bash[20748]: audit 2026-03-10T13:45:42.675109+0000 mon.a (mon.0) 433 : audit [DBG] from='client.? 
192.168.123.100:0/697188099' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T13:45:43.677 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:43 vm00 bash[20748]: audit 2026-03-10T13:45:42.675109+0000 mon.a (mon.0) 433 : audit [DBG] from='client.? 192.168.123.100:0/697188099' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T13:45:43.677 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:43 vm00 bash[20748]: audit 2026-03-10T13:45:42.677786+0000 mon.a (mon.0) 434 : audit [DBG] from='client.? 192.168.123.100:0/177308099' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T13:45:43.677 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:43 vm00 bash[20748]: audit 2026-03-10T13:45:42.677786+0000 mon.a (mon.0) 434 : audit [DBG] from='client.? 192.168.123.100:0/177308099' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T13:45:43.713 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738388 got 34359738389 for osd.0 2026-03-10T13:45:43.713 DEBUG:teuthology.parallel:result is None 2026-03-10T13:45:43.713 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-10T13:45:43.713 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph pg dump --format=json 2026-03-10T13:45:43.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:43 vm07 bash[23044]: cluster 2026-03-10T13:45:42.220108+0000 mgr.a (mgr.14150) 152 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:43.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:43 vm07 bash[23044]: cluster 2026-03-10T13:45:42.220108+0000 mgr.a (mgr.14150) 152 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T13:45:43.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:43 vm07 bash[23044]: audit 2026-03-10T13:45:42.675109+0000 mon.a (mon.0) 433 : audit [DBG] from='client.? 192.168.123.100:0/697188099' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T13:45:43.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:43 vm07 bash[23044]: audit 2026-03-10T13:45:42.675109+0000 mon.a (mon.0) 433 : audit [DBG] from='client.? 192.168.123.100:0/697188099' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T13:45:43.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:43 vm07 bash[23044]: audit 2026-03-10T13:45:42.677786+0000 mon.a (mon.0) 434 : audit [DBG] from='client.? 192.168.123.100:0/177308099' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T13:45:43.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:43 vm07 bash[23044]: audit 2026-03-10T13:45:42.677786+0000 mon.a (mon.0) 434 : audit [DBG] from='client.? 192.168.123.100:0/177308099' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-10T13:45:44.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:44 vm08 bash[23387]: audit 2026-03-10T13:45:43.667127+0000 mon.c (mon.1) 9 : audit [DBG] from='client.? 192.168.123.100:0/929911573' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T13:45:44.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:44 vm08 bash[23387]: audit 2026-03-10T13:45:43.667127+0000 mon.c (mon.1) 9 : audit [DBG] from='client.? 192.168.123.100:0/929911573' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T13:45:44.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:44 vm00 bash[20748]: audit 2026-03-10T13:45:43.667127+0000 mon.c (mon.1) 9 : audit [DBG] from='client.? 
192.168.123.100:0/929911573' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T13:45:44.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:44 vm00 bash[20748]: audit 2026-03-10T13:45:43.667127+0000 mon.c (mon.1) 9 : audit [DBG] from='client.? 192.168.123.100:0/929911573' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T13:45:44.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:44 vm07 bash[23044]: audit 2026-03-10T13:45:43.667127+0000 mon.c (mon.1) 9 : audit [DBG] from='client.? 192.168.123.100:0/929911573' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T13:45:44.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:44 vm07 bash[23044]: audit 2026-03-10T13:45:43.667127+0000 mon.c (mon.1) 9 : audit [DBG] from='client.? 192.168.123.100:0/929911573' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-10T13:45:45.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:45 vm08 bash[23387]: cluster 2026-03-10T13:45:44.220358+0000 mgr.a (mgr.14150) 153 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:45.587 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:45 vm08 bash[23387]: cluster 2026-03-10T13:45:44.220358+0000 mgr.a (mgr.14150) 153 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:45.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:45 vm00 bash[20748]: cluster 2026-03-10T13:45:44.220358+0000 mgr.a (mgr.14150) 153 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:45.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:45 vm00 bash[20748]: cluster 2026-03-10T13:45:44.220358+0000 mgr.a (mgr.14150) 153 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 
GiB avail 2026-03-10T13:45:45.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:45 vm07 bash[23044]: cluster 2026-03-10T13:45:44.220358+0000 mgr.a (mgr.14150) 153 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:45.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:45 vm07 bash[23044]: cluster 2026-03-10T13:45:44.220358+0000 mgr.a (mgr.14150) 153 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:47.425 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:45:47.653 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:45:47.654 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-10T13:45:47.662 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:47 vm00 bash[20748]: cluster 2026-03-10T13:45:46.220595+0000 mgr.a (mgr.14150) 154 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:47.662 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:47 vm00 bash[20748]: cluster 2026-03-10T13:45:46.220595+0000 mgr.a (mgr.14150) 154 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:47.701 
INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":110,"stamp":"2026-03-10T13:45:46.220488+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":62902272,"kb_used":82808,"kb_used_data":1908,"kb_used_omap":4,"kb_used_meta":80443,"kb_avail":62819464,"statfs":{"total":64411926528,"available":64327131136,"internally_reserved":0,"allocated":1953792,"data_stored":1550235,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4774,"internal_metadata":82373978},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency
_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001512"},"pg_stats":[{"pgid":"1.0","version":"21'32","reported_seq":57,"reported_epoch":22,"state":"active+clean","last_fresh":"2026-03-10T13:45:12.999408+0000","last_change":"2026-03-10T13:45:12.216382+0000","last_active":"2026-03-10T13:45:12.999408+0000","last_peered":"2026-03-10T13:45:12.999408+0000","last_clean":"2026-03-10T13:45:12.999408+0000","last_became_active":"2026-03-10T13:45:12.216187+0000","last_became_peered":"2026-03-10T13:45:12.216187+0000","last_unstale":"2026-03-10T13:45:12.999408+0000","last_undegraded":"2026-03-10T13:45:12.999408+0000","last_fullsized":"2026-03-10T13:45:12.999408+0000","mapping_epoch":20,"log_start":"0'0","ondisk_log_start":"0'0","create
d":20,"last_epoch_clean":21,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:45:10.971687+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:45:10.971687+0000","last_clean_scrub_stamp":"2026-03-10T13:45:10.971687+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T23:37:17.317269+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,0],"acting":[1,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objec
ts_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":2,"up_from":18,"seq":77309411338,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27600,"kb_used_data":636,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939824,"statfs":{"total":21470642176,"available":21442379776,"internally_reserved":0,"allocated":651264,"data_stored":516745,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574863,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27600,"kb_used_data":636,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939824,"statfs":{"total":21470642176,
"available":21442379776,"internally_reserved":0,"allocated":651264,"data_stored":516745,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738390,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27608,"kb_used_data":636,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939816,"statfs":{"total":21470642176,"available":21442371584,"internally_reserved":0,"allocated":651264,"data_stored":516745,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1594,"internal_metadata":27457990},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T13:45:47.701 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm 
--image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph pg dump --format=json 2026-03-10T13:45:47.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:47 vm07 bash[23044]: cluster 2026-03-10T13:45:46.220595+0000 mgr.a (mgr.14150) 154 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:47.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:47 vm07 bash[23044]: cluster 2026-03-10T13:45:46.220595+0000 mgr.a (mgr.14150) 154 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:47.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:47 vm08 bash[23387]: cluster 2026-03-10T13:45:46.220595+0000 mgr.a (mgr.14150) 154 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:47.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:47 vm08 bash[23387]: cluster 2026-03-10T13:45:46.220595+0000 mgr.a (mgr.14150) 154 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:48.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:48 vm00 bash[20748]: audit 2026-03-10T13:45:47.654623+0000 mgr.a (mgr.14150) 155 : audit [DBG] from='client.14346 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:45:48.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:48 vm00 bash[20748]: audit 2026-03-10T13:45:47.654623+0000 mgr.a (mgr.14150) 155 : audit [DBG] from='client.14346 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:45:48.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:48 vm07 bash[23044]: audit 2026-03-10T13:45:47.654623+0000 mgr.a (mgr.14150) 155 : audit [DBG] from='client.14346 -' 
entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:45:48.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:48 vm07 bash[23044]: audit 2026-03-10T13:45:47.654623+0000 mgr.a (mgr.14150) 155 : audit [DBG] from='client.14346 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:45:48.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:48 vm08 bash[23387]: audit 2026-03-10T13:45:47.654623+0000 mgr.a (mgr.14150) 155 : audit [DBG] from='client.14346 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:45:48.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:48 vm08 bash[23387]: audit 2026-03-10T13:45:47.654623+0000 mgr.a (mgr.14150) 155 : audit [DBG] from='client.14346 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:45:49.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:49 vm00 bash[20748]: cluster 2026-03-10T13:45:48.220839+0000 mgr.a (mgr.14150) 156 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:49.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:49 vm00 bash[20748]: cluster 2026-03-10T13:45:48.220839+0000 mgr.a (mgr.14150) 156 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:49.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:49 vm07 bash[23044]: cluster 2026-03-10T13:45:48.220839+0000 mgr.a (mgr.14150) 156 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:49.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:49 vm07 bash[23044]: cluster 2026-03-10T13:45:48.220839+0000 mgr.a (mgr.14150) 156 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 
81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:49.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:49 vm08 bash[23387]: cluster 2026-03-10T13:45:48.220839+0000 mgr.a (mgr.14150) 156 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:49.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:49 vm08 bash[23387]: cluster 2026-03-10T13:45:48.220839+0000 mgr.a (mgr.14150) 156 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:51.435 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:45:51.677 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:45:51.677 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-10T13:45:51.685 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:51 vm00 bash[20748]: cluster 2026-03-10T13:45:50.221051+0000 mgr.a (mgr.14150) 157 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:51.686 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:51 vm00 bash[20748]: cluster 2026-03-10T13:45:50.221051+0000 mgr.a (mgr.14150) 157 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:51.722 
INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":112,"stamp":"2026-03-10T13:45:50.220975+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":62902272,"kb_used":82808,"kb_used_data":1908,"kb_used_omap":4,"kb_used_meta":80443,"kb_avail":62819464,"statfs":{"total":64411926528,"available":64327131136,"internally_reserved":0,"allocated":1953792,"data_stored":1550235,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4774,"internal_metadata":82373978},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency
_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001507"},"pg_stats":[{"pgid":"1.0","version":"21'32","reported_seq":57,"reported_epoch":22,"state":"active+clean","last_fresh":"2026-03-10T13:45:12.999408+0000","last_change":"2026-03-10T13:45:12.216382+0000","last_active":"2026-03-10T13:45:12.999408+0000","last_peered":"2026-03-10T13:45:12.999408+0000","last_clean":"2026-03-10T13:45:12.999408+0000","last_became_active":"2026-03-10T13:45:12.216187+0000","last_became_peered":"2026-03-10T13:45:12.216187+0000","last_unstale":"2026-03-10T13:45:12.999408+0000","last_undegraded":"2026-03-10T13:45:12.999408+0000","last_fullsized":"2026-03-10T13:45:12.999408+0000","mapping_epoch":20,"log_start":"0'0","ondisk_log_start":"0'0","create
d":20,"last_epoch_clean":21,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T13:45:10.971687+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T13:45:10.971687+0000","last_clean_scrub_stamp":"2026-03-10T13:45:10.971687+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T23:37:17.317269+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,0],"acting":[1,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objec
ts_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":2,"up_from":18,"seq":77309411339,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27600,"kb_used_data":636,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939824,"statfs":{"total":21470642176,"available":21442379776,"internally_reserved":0,"allocated":651264,"data_stored":516745,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574864,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27600,"kb_used_data":636,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939824,"statfs":{"total":21470642176,
"available":21442379776,"internally_reserved":0,"allocated":651264,"data_stored":516745,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738390,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27608,"kb_used_data":636,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939816,"statfs":{"total":21470642176,"available":21442371584,"internally_reserved":0,"allocated":651264,"data_stored":516745,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1594,"internal_metadata":27457990},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T13:45:51.723 INFO:tasks.cephadm.ceph_manager.ceph:clean! 
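The `ceph pg dump --format=json` payload above is what drives the `clean!` message that follows it: the task parses the dump and waits until every PG reports `active+clean` with nothing degraded, misplaced, or unfound. A minimal sketch of that condition in Python (the helper name and the trimmed sample payload are illustrative, not teuthology's actual code):

```python
import json

def cluster_is_clean(pg_dump: dict) -> bool:
    """True when every PG is active+clean and its stat_sum shows no
    degraded/misplaced/unfound objects -- the condition the task waits
    for before logging 'clean!'."""
    for pg in pg_dump["pg_map"]["pg_stats"]:
        if pg["state"] != "active+clean":
            return False
        s = pg["stat_sum"]
        if s["num_objects_degraded"] or s["num_objects_misplaced"] \
                or s["num_objects_unfound"]:
            return False
    return True

# Hand-trimmed sample shaped like the dump above (one PG, 1.0, clean)
sample = json.loads('''{
  "pg_ready": true,
  "pg_map": {
    "pg_stats": [
      {"pgid": "1.0", "state": "active+clean",
       "stat_sum": {"num_objects_degraded": 0,
                    "num_objects_misplaced": 0,
                    "num_objects_unfound": 0}}
    ]
  }
}''')

print(cluster_is_clean(sample))  # True
```

The subsequent `wait_until_healthy` step is the same loop at a coarser granularity: it polls `ceph health --format=json` until `"status"` is `"HEALTH_OK"`, which is exactly the `{"status":"HEALTH_OK","checks":{},"mutes":[]}` response visible a few entries later.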
2026-03-10T13:45:51.723 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-10T13:45:51.723 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T13:45:51.723 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph health --format=json 2026-03-10T13:45:51.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:51 vm07 bash[23044]: cluster 2026-03-10T13:45:50.221051+0000 mgr.a (mgr.14150) 157 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:51.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:51 vm07 bash[23044]: cluster 2026-03-10T13:45:50.221051+0000 mgr.a (mgr.14150) 157 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:51.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:51 vm08 bash[23387]: cluster 2026-03-10T13:45:50.221051+0000 mgr.a (mgr.14150) 157 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:51.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:51 vm08 bash[23387]: cluster 2026-03-10T13:45:50.221051+0000 mgr.a (mgr.14150) 157 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:52.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:52 vm00 bash[20748]: audit 2026-03-10T13:45:51.677922+0000 mgr.a (mgr.14150) 158 : audit [DBG] from='client.14352 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:45:52.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:52 vm00 bash[20748]: audit 2026-03-10T13:45:51.677922+0000 mgr.a (mgr.14150) 158 : audit [DBG] from='client.14352 -' entity='client.admin' cmd=[{"prefix": "pg dump", 
"target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:45:52.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:52 vm07 bash[23044]: audit 2026-03-10T13:45:51.677922+0000 mgr.a (mgr.14150) 158 : audit [DBG] from='client.14352 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:45:52.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:52 vm07 bash[23044]: audit 2026-03-10T13:45:51.677922+0000 mgr.a (mgr.14150) 158 : audit [DBG] from='client.14352 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:45:52.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:52 vm08 bash[23387]: audit 2026-03-10T13:45:51.677922+0000 mgr.a (mgr.14150) 158 : audit [DBG] from='client.14352 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:45:52.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:52 vm08 bash[23387]: audit 2026-03-10T13:45:51.677922+0000 mgr.a (mgr.14150) 158 : audit [DBG] from='client.14352 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:45:53.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:53 vm00 bash[20748]: cluster 2026-03-10T13:45:52.221309+0000 mgr.a (mgr.14150) 159 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:53.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:53 vm00 bash[20748]: cluster 2026-03-10T13:45:52.221309+0000 mgr.a (mgr.14150) 159 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:53.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:53 vm07 bash[23044]: cluster 2026-03-10T13:45:52.221309+0000 mgr.a (mgr.14150) 159 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB 
used, 60 GiB / 60 GiB avail 2026-03-10T13:45:53.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:53 vm07 bash[23044]: cluster 2026-03-10T13:45:52.221309+0000 mgr.a (mgr.14150) 159 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:53.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:53 vm08 bash[23387]: cluster 2026-03-10T13:45:52.221309+0000 mgr.a (mgr.14150) 159 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:53.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:53 vm08 bash[23387]: cluster 2026-03-10T13:45:52.221309+0000 mgr.a (mgr.14150) 159 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:55.446 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:45:55.700 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:45:55.700 INFO:teuthology.orchestra.run.vm00.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T13:45:55.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:55 vm00 bash[20748]: cluster 2026-03-10T13:45:54.221590+0000 mgr.a (mgr.14150) 160 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:55.714 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:55 vm00 bash[20748]: cluster 2026-03-10T13:45:54.221590+0000 mgr.a (mgr.14150) 160 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:55.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:55 vm07 bash[23044]: cluster 2026-03-10T13:45:54.221590+0000 mgr.a (mgr.14150) 160 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:55.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:55 vm07 
bash[23044]: cluster 2026-03-10T13:45:54.221590+0000 mgr.a (mgr.14150) 160 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:55.751 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T13:45:55.751 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T13:45:55.751 INFO:teuthology.run_tasks:Running task cephadm.shell... 2026-03-10T13:45:55.753 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm00.local 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- bash -c 'set -e 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> set -x 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> ceph orch apply node-exporter 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> ceph orch apply grafana 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> ceph orch apply alertmanager 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> ceph orch apply prometheus 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> sleep 240 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> ceph orch ls 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> ceph orch ps 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> ceph orch host ls 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> MON_DAEMON=$(ceph orch ps --daemon-type mon -f json | jq -r '"'"'last | .daemon_name'"'"') 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> GRAFANA_HOST=$(ceph orch ps --daemon-type grafana -f json | jq -e '"'"'.[]'"'"' | jq -r '"'"'.hostname'"'"') 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> PROM_HOST=$(ceph orch ps --daemon-type prometheus -f json | jq -e 
'"'"'.[]'"'"' | jq -r '"'"'.hostname'"'"') 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> ALERTM_HOST=$(ceph orch ps --daemon-type alertmanager -f json | jq -e '"'"'.[]'"'"' | jq -r '"'"'.hostname'"'"') 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> GRAFANA_IP=$(ceph orch host ls -f json | jq -r --arg GRAFANA_HOST "$GRAFANA_HOST" '"'"'.[] | select(.hostname==$GRAFANA_HOST) | .addr'"'"') 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> PROM_IP=$(ceph orch host ls -f json | jq -r --arg PROM_HOST "$PROM_HOST" '"'"'.[] | select(.hostname==$PROM_HOST) | .addr'"'"') 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> ALERTM_IP=$(ceph orch host ls -f json | jq -r --arg ALERTM_HOST "$ALERTM_HOST" '"'"'.[] | select(.hostname==$ALERTM_HOST) | .addr'"'"') 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> # check each host node-exporter metrics endpoint is responsive 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> ALL_HOST_IPS=$(ceph orch host ls -f json | jq -r '"'"'.[] | .addr'"'"') 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> for ip in $ALL_HOST_IPS; do 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> curl -s http://${ip}:9100/metric 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> done 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> # check grafana endpoints are responsive and database health is okay 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> curl -k -s https://${GRAFANA_IP}:3000/api/health 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> curl -k -s https://${GRAFANA_IP}:3000/api/health | jq -e '"'"'.database == "ok"'"'"' 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> # stop mon daemon in order to trigger an alert 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> ceph orch daemon stop $MON_DAEMON 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> sleep 120 
2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> # check prometheus endpoints are responsive and mon down alert is firing 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> curl -s http://${PROM_IP}:9095/api/v1/status/config 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> curl -s http://${PROM_IP}:9095/api/v1/status/config | jq -e '"'"'.status == "success"'"'"' 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> curl -s http://${PROM_IP}:9095/api/v1/alerts 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> curl -s http://${PROM_IP}:9095/api/v1/alerts | jq -e '"'"'.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"'"'"' 2026-03-10T13:45:55.753 DEBUG:teuthology.orchestra.run.vm00:> # check alertmanager endpoints are responsive and mon down alert is active 2026-03-10T13:45:55.754 DEBUG:teuthology.orchestra.run.vm00:> curl -s http://${ALERTM_IP}:9093/api/v2/status 2026-03-10T13:45:55.754 DEBUG:teuthology.orchestra.run.vm00:> curl -s http://${ALERTM_IP}:9093/api/v2/alerts 2026-03-10T13:45:55.754 DEBUG:teuthology.orchestra.run.vm00:> curl -s http://${ALERTM_IP}:9093/api/v2/alerts | jq -e '"'"'.[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"'"'"' 2026-03-10T13:45:55.754 DEBUG:teuthology.orchestra.run.vm00:> ' 2026-03-10T13:45:55.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:55 vm08 bash[23387]: cluster 2026-03-10T13:45:54.221590+0000 mgr.a (mgr.14150) 160 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:55.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:55 vm08 bash[23387]: cluster 2026-03-10T13:45:54.221590+0000 mgr.a (mgr.14150) 160 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:45:56.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:56 vm00 bash[20748]: audit 
2026-03-10T13:45:55.701720+0000 mon.a (mon.0) 435 : audit [DBG] from='client.? 192.168.123.100:0/1056822412' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T13:45:56.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:56 vm00 bash[20748]: audit 2026-03-10T13:45:55.701720+0000 mon.a (mon.0) 435 : audit [DBG] from='client.? 192.168.123.100:0/1056822412' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T13:45:56.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:56 vm07 bash[23044]: audit 2026-03-10T13:45:55.701720+0000 mon.a (mon.0) 435 : audit [DBG] from='client.? 192.168.123.100:0/1056822412' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T13:45:56.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:45:56 vm07 bash[23044]: audit 2026-03-10T13:45:55.701720+0000 mon.a (mon.0) 435 : audit [DBG] from='client.? 192.168.123.100:0/1056822412' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T13:45:56.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:56 vm08 bash[23387]: audit 2026-03-10T13:45:55.701720+0000 mon.a (mon.0) 435 : audit [DBG] from='client.? 192.168.123.100:0/1056822412' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-10T13:45:56.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:45:56 vm08 bash[23387]: audit 2026-03-10T13:45:55.701720+0000 mon.a (mon.0) 435 : audit [DBG] from='client.? 
192.168.123.100:0/1056822412' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch
2026-03-10T13:45:57.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:57 vm00 bash[20748]: cluster 2026-03-10T13:45:56.221850+0000 mgr.a (mgr.14150) 161 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:45:59.455 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config
2026-03-10T13:45:59.566 INFO:teuthology.orchestra.run.vm00.stderr:+ ceph orch apply node-exporter
2026-03-10T13:45:59.716 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:45:59 vm00 bash[20748]: cluster 2026-03-10T13:45:58.222067+0000 mgr.a (mgr.14150) 162 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:45:59.719 INFO:teuthology.orchestra.run.vm00.stdout:Scheduled node-exporter update...
2026-03-10T13:45:59.757 INFO:teuthology.orchestra.run.vm00.stderr:+ ceph orch apply grafana
2026-03-10T13:45:59.903 INFO:teuthology.orchestra.run.vm00.stdout:Scheduled grafana update...
2026-03-10T13:45:59.920 INFO:teuthology.orchestra.run.vm00.stderr:+ ceph orch apply alertmanager
2026-03-10T13:46:00.104 INFO:teuthology.orchestra.run.vm00.stdout:Scheduled alertmanager update...
2026-03-10T13:46:00.116 INFO:teuthology.orchestra.run.vm00.stderr:+ ceph orch apply prometheus
2026-03-10T13:46:00.276 INFO:teuthology.orchestra.run.vm00.stdout:Scheduled prometheus update...
2026-03-10T13:46:00.299 INFO:teuthology.orchestra.run.vm00.stderr:+ sleep 240
2026-03-10T13:46:00.568 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:00 vm00 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:46:00.568 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:00 vm00 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:46:00.568 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 13:46:00 vm00 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:46:00.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:00 vm00 bash[20748]: audit 2026-03-10T13:45:59.711810+0000 mgr.a (mgr.14150) 163 : audit [DBG] from='client.14364 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:46:00.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:00 vm00 bash[20748]: cephadm 2026-03-10T13:45:59.712502+0000 mgr.a (mgr.14150) 164 : cephadm [INF] Saving service node-exporter spec with placement *
2026-03-10T13:46:00.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:00 vm00 bash[20748]: audit 2026-03-10T13:45:59.716304+0000 mon.a (mon.0) 436 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:00.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:00 vm00 bash[20748]: audit 2026-03-10T13:45:59.716779+0000 mon.a (mon.0) 437 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:46:00.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:00 vm00 bash[20748]: audit 2026-03-10T13:45:59.895463+0000 mgr.a (mgr.14150) 165 : audit [DBG] from='client.14370 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:46:00.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:00 vm00 bash[20748]: cephadm 2026-03-10T13:45:59.896176+0000 mgr.a (mgr.14150) 166 : cephadm [INF] Saving service grafana spec with placement count:1
2026-03-10T13:46:00.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:00 vm00 bash[20748]: audit 2026-03-10T13:45:59.900167+0000 mon.a (mon.0) 438 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:00.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:00 vm00 bash[20748]: audit 2026-03-10T13:46:00.025051+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:46:00.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:00 vm00 bash[20748]: audit 2026-03-10T13:46:00.025808+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:46:00.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:00 vm00 bash[20748]: audit 2026-03-10T13:46:00.030746+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:00.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:00 vm00 bash[20748]: cephadm 2026-03-10T13:46:00.031672+0000 mgr.a (mgr.14150) 167 : cephadm [INF] Deploying daemon node-exporter.vm00 on vm00
2026-03-10T13:46:00.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:00 vm00 bash[20748]: audit 2026-03-10T13:46:00.102919+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:00.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:00 vm00 bash[20748]: audit 2026-03-10T13:46:00.273461+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:00.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:00 vm00 bash[20748]: audit 2026-03-10T13:46:00.681507+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:00.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:00 vm00 bash[20748]: audit 2026-03-10T13:46:00.686673+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:00.888 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:00 vm00 bash[20748]: audit 2026-03-10T13:46:00.689927+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:01.340 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:01 vm07 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:46:01.341 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:01 vm07 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:46:01.341 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:46:01 vm07 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:46:01.980 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 13:46:01 vm08 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:46:01.980 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:01 vm08 bash[23387]: audit 2026-03-10T13:46:00.098055+0000 mgr.a (mgr.14150) 168 : audit [DBG] from='client.24245 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:46:01.980 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:01 vm08 bash[23387]: cephadm 2026-03-10T13:46:00.098694+0000 mgr.a (mgr.14150) 169 : cephadm [INF] Saving service alertmanager spec with placement count:1
2026-03-10T13:46:01.980 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:01 vm08 bash[23387]: cluster 2026-03-10T13:46:00.222299+0000 mgr.a (mgr.14150) 170 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:01.980 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:01 vm08 bash[23387]: audit 2026-03-10T13:46:00.266433+0000 mgr.a (mgr.14150) 171 : audit [DBG] from='client.24251 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:46:01.980 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:01 vm08 bash[23387]: cephadm 2026-03-10T13:46:00.267092+0000 mgr.a (mgr.14150) 172 : cephadm [INF] Saving service prometheus spec with placement count:1
2026-03-10T13:46:01.980 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:01 vm08 bash[23387]: cephadm 2026-03-10T13:46:00.690386+0000 mgr.a (mgr.14150) 173 : cephadm [INF] Deploying daemon node-exporter.vm07 on vm07
2026-03-10T13:46:01.980 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:01 vm08 bash[23387]: audit 2026-03-10T13:46:01.376281+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:01.980 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:01 vm08 bash[23387]: audit 2026-03-10T13:46:01.381288+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:01.981 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:01 vm08 bash[23387]: audit 2026-03-10T13:46:01.384777+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:01.981 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:01 vm08 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:46:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:01 vm07 bash[23044]: cephadm 2026-03-10T13:46:00.098694+0000 mgr.a (mgr.14150) 169 :
cephadm [INF] Saving service alertmanager spec with placement count:1 2026-03-10T13:46:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:01 vm07 bash[23044]: cluster 2026-03-10T13:46:00.222299+0000 mgr.a (mgr.14150) 170 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:01 vm07 bash[23044]: cluster 2026-03-10T13:46:00.222299+0000 mgr.a (mgr.14150) 170 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:01 vm07 bash[23044]: audit 2026-03-10T13:46:00.266433+0000 mgr.a (mgr.14150) 171 : audit [DBG] from='client.24251 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:46:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:01 vm07 bash[23044]: audit 2026-03-10T13:46:00.266433+0000 mgr.a (mgr.14150) 171 : audit [DBG] from='client.24251 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:46:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:01 vm07 bash[23044]: cephadm 2026-03-10T13:46:00.267092+0000 mgr.a (mgr.14150) 172 : cephadm [INF] Saving service prometheus spec with placement count:1 2026-03-10T13:46:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:01 vm07 bash[23044]: cephadm 2026-03-10T13:46:00.267092+0000 mgr.a (mgr.14150) 172 : cephadm [INF] Saving service prometheus spec with placement count:1 2026-03-10T13:46:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:01 vm07 bash[23044]: cephadm 2026-03-10T13:46:00.690386+0000 mgr.a (mgr.14150) 173 : cephadm [INF] Deploying daemon node-exporter.vm07 on vm07 2026-03-10T13:46:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:01 vm07 bash[23044]: cephadm 
2026-03-10T13:46:00.690386+0000 mgr.a (mgr.14150) 173 : cephadm [INF] Deploying daemon node-exporter.vm07 on vm07 2026-03-10T13:46:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:01 vm07 bash[23044]: audit 2026-03-10T13:46:01.376281+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:01 vm07 bash[23044]: audit 2026-03-10T13:46:01.376281+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:01 vm07 bash[23044]: audit 2026-03-10T13:46:01.381288+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:01 vm07 bash[23044]: audit 2026-03-10T13:46:01.381288+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:01 vm07 bash[23044]: audit 2026-03-10T13:46:01.384777+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:01.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:01 vm07 bash[23044]: audit 2026-03-10T13:46:01.384777+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.084 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:01 vm00 bash[20748]: audit 2026-03-10T13:46:00.098055+0000 mgr.a (mgr.14150) 168 : audit [DBG] from='client.24245 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:46:02.084 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:01 vm00 bash[20748]: audit 2026-03-10T13:46:00.098055+0000 mgr.a (mgr.14150) 168 : audit [DBG] from='client.24245 -' 
entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:46:02.084 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:01 vm00 bash[20748]: cephadm 2026-03-10T13:46:00.098694+0000 mgr.a (mgr.14150) 169 : cephadm [INF] Saving service alertmanager spec with placement count:1 2026-03-10T13:46:02.085 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:01 vm00 bash[20748]: cephadm 2026-03-10T13:46:00.098694+0000 mgr.a (mgr.14150) 169 : cephadm [INF] Saving service alertmanager spec with placement count:1 2026-03-10T13:46:02.085 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:01 vm00 bash[20748]: cluster 2026-03-10T13:46:00.222299+0000 mgr.a (mgr.14150) 170 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:02.085 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:01 vm00 bash[20748]: cluster 2026-03-10T13:46:00.222299+0000 mgr.a (mgr.14150) 170 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:02.085 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:01 vm00 bash[20748]: audit 2026-03-10T13:46:00.266433+0000 mgr.a (mgr.14150) 171 : audit [DBG] from='client.24251 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:46:02.085 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:01 vm00 bash[20748]: audit 2026-03-10T13:46:00.266433+0000 mgr.a (mgr.14150) 171 : audit [DBG] from='client.24251 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:46:02.085 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:01 vm00 bash[20748]: cephadm 2026-03-10T13:46:00.267092+0000 mgr.a (mgr.14150) 172 : cephadm [INF] Saving service prometheus spec with placement count:1 2026-03-10T13:46:02.085 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:01 vm00 bash[20748]: cephadm 2026-03-10T13:46:00.267092+0000 mgr.a (mgr.14150) 172 : cephadm [INF] Saving service prometheus spec with placement count:1 2026-03-10T13:46:02.085 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:01 vm00 bash[20748]: cephadm 2026-03-10T13:46:00.690386+0000 mgr.a (mgr.14150) 173 : cephadm [INF] Deploying daemon node-exporter.vm07 on vm07 2026-03-10T13:46:02.085 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:01 vm00 bash[20748]: cephadm 2026-03-10T13:46:00.690386+0000 mgr.a (mgr.14150) 173 : cephadm [INF] Deploying daemon node-exporter.vm07 on vm07 2026-03-10T13:46:02.085 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:01 vm00 bash[20748]: audit 2026-03-10T13:46:01.376281+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.085 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:01 vm00 bash[20748]: audit 2026-03-10T13:46:01.376281+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.085 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:01 vm00 bash[20748]: audit 2026-03-10T13:46:01.381288+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.085 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:01 vm00 bash[20748]: audit 2026-03-10T13:46:01.381288+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.085 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:01 vm00 bash[20748]: audit 2026-03-10T13:46:01.384777+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.085 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:01 vm00 bash[20748]: audit 2026-03-10T13:46:01.384777+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' 
entity='mgr.a' 2026-03-10T13:46:02.337 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:01 vm08 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T13:46:02.337 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 13:46:01 vm08 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: cephadm 2026-03-10T13:46:01.385237+0000 mgr.a (mgr.14150) 174 : cephadm [INF] Deploying daemon node-exporter.vm08 on vm08 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: cephadm 2026-03-10T13:46:01.385237+0000 mgr.a (mgr.14150) 174 : cephadm [INF] Deploying daemon node-exporter.vm08 on vm08 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: audit 2026-03-10T13:46:02.073161+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: audit 2026-03-10T13:46:02.073161+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: audit 2026-03-10T13:46:02.077023+0000 mon.a (mon.0) 451 : audit [INF] 
from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: audit 2026-03-10T13:46:02.077023+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: audit 2026-03-10T13:46:02.080805+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: audit 2026-03-10T13:46:02.080805+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: audit 2026-03-10T13:46:02.084160+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: audit 2026-03-10T13:46:02.084160+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: audit 2026-03-10T13:46:02.158067+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: audit 2026-03-10T13:46:02.158067+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: audit 2026-03-10T13:46:02.161600+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: audit 
2026-03-10T13:46:02.161600+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: audit 2026-03-10T13:46:02.164398+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: audit 2026-03-10T13:46:02.164398+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: audit 2026-03-10T13:46:02.168635+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: audit 2026-03-10T13:46:02.168635+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: audit 2026-03-10T13:46:02.513264+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:02.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:02 vm07 bash[23044]: audit 2026-03-10T13:46:02.513264+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.061 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: cephadm 2026-03-10T13:46:01.385237+0000 mgr.a (mgr.14150) 174 : cephadm [INF] Deploying daemon node-exporter.vm08 on vm08 2026-03-10T13:46:03.061 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: cephadm 
2026-03-10T13:46:01.385237+0000 mgr.a (mgr.14150) 174 : cephadm [INF] Deploying daemon node-exporter.vm08 on vm08 2026-03-10T13:46:03.061 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: audit 2026-03-10T13:46:02.073161+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.061 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: audit 2026-03-10T13:46:02.073161+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.061 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: audit 2026-03-10T13:46:02.077023+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.061 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: audit 2026-03-10T13:46:02.077023+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.061 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: audit 2026-03-10T13:46:02.080805+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.061 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: audit 2026-03-10T13:46:02.080805+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.061 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: audit 2026-03-10T13:46:02.084160+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.061 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: audit 2026-03-10T13:46:02.084160+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.061 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: audit 2026-03-10T13:46:02.158067+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.061 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: audit 2026-03-10T13:46:02.158067+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.061 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: audit 2026-03-10T13:46:02.161600+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.061 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: audit 2026-03-10T13:46:02.161600+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.061 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: audit 2026-03-10T13:46:02.164398+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T13:46:03.061 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: audit 2026-03-10T13:46:02.164398+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T13:46:03.061 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: audit 2026-03-10T13:46:02.168635+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.061 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: audit 2026-03-10T13:46:02.168635+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.061 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: audit 2026-03-10T13:46:02.513264+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.061 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:02 vm00 bash[20748]: audit 2026-03-10T13:46:02.513264+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: cephadm 2026-03-10T13:46:01.385237+0000 mgr.a (mgr.14150) 174 : cephadm [INF] Deploying daemon node-exporter.vm08 on vm08 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: cephadm 2026-03-10T13:46:01.385237+0000 mgr.a (mgr.14150) 174 : cephadm [INF] Deploying daemon node-exporter.vm08 on vm08 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: audit 2026-03-10T13:46:02.073161+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: audit 2026-03-10T13:46:02.073161+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: audit 2026-03-10T13:46:02.077023+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: audit 2026-03-10T13:46:02.077023+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: audit 2026-03-10T13:46:02.080805+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' 
entity='mgr.a' 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: audit 2026-03-10T13:46:02.080805+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: audit 2026-03-10T13:46:02.084160+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: audit 2026-03-10T13:46:02.084160+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: audit 2026-03-10T13:46:02.158067+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: audit 2026-03-10T13:46:02.158067+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: audit 2026-03-10T13:46:02.161600+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: audit 2026-03-10T13:46:02.161600+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: audit 2026-03-10T13:46:02.164398+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 
13:46:02 vm08 bash[23387]: audit 2026-03-10T13:46:02.164398+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: audit 2026-03-10T13:46:02.168635+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: audit 2026-03-10T13:46:02.168635+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: audit 2026-03-10T13:46:02.513264+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:02 vm08 bash[23387]: audit 2026-03-10T13:46:02.513264+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:03.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:03 vm08 bash[23387]: cephadm 2026-03-10T13:46:02.091512+0000 mgr.a (mgr.14150) 175 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T13:46:03.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:03 vm08 bash[23387]: cephadm 2026-03-10T13:46:02.091512+0000 mgr.a (mgr.14150) 175 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T13:46:03.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:03 vm08 bash[23387]: audit 2026-03-10T13:46:02.164683+0000 mgr.a (mgr.14150) 176 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T13:46:03.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:03 vm08 bash[23387]: audit 2026-03-10T13:46:02.164683+0000 mgr.a (mgr.14150) 176 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T13:46:03.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:03 vm08 bash[23387]: cephadm 2026-03-10T13:46:02.175431+0000 mgr.a (mgr.14150) 177 : cephadm [INF] Deploying daemon grafana.vm00 on vm00 2026-03-10T13:46:03.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:03 vm08 bash[23387]: cephadm 2026-03-10T13:46:02.175431+0000 mgr.a (mgr.14150) 177 : cephadm [INF] Deploying daemon grafana.vm00 on vm00 2026-03-10T13:46:03.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:03 vm08 bash[23387]: cluster 2026-03-10T13:46:02.222613+0000 mgr.a (mgr.14150) 178 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:03.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:03 vm08 bash[23387]: cluster 2026-03-10T13:46:02.222613+0000 mgr.a (mgr.14150) 178 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:03 vm07 bash[23044]: cephadm 2026-03-10T13:46:02.091512+0000 mgr.a (mgr.14150) 175 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T13:46:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:03 vm07 bash[23044]: cephadm 2026-03-10T13:46:02.091512+0000 mgr.a (mgr.14150) 175 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T13:46:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:03 vm07 bash[23044]: audit 2026-03-10T13:46:02.164683+0000 mgr.a (mgr.14150) 176 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T13:46:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:03 vm07 bash[23044]: audit 2026-03-10T13:46:02.164683+0000 mgr.a (mgr.14150) 176 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T13:46:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:03 vm07 bash[23044]: cephadm 2026-03-10T13:46:02.175431+0000 mgr.a (mgr.14150) 177 : cephadm [INF] Deploying daemon grafana.vm00 on vm00 2026-03-10T13:46:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:03 vm07 bash[23044]: cephadm 2026-03-10T13:46:02.175431+0000 mgr.a (mgr.14150) 177 : cephadm [INF] Deploying daemon grafana.vm00 on vm00 2026-03-10T13:46:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:03 vm07 bash[23044]: cluster 2026-03-10T13:46:02.222613+0000 mgr.a (mgr.14150) 178 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:03.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:03 vm07 bash[23044]: cluster 2026-03-10T13:46:02.222613+0000 mgr.a (mgr.14150) 178 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:04.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:03 vm00 bash[20748]: cephadm 2026-03-10T13:46:02.091512+0000 mgr.a (mgr.14150) 175 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T13:46:04.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:03 vm00 bash[20748]: cephadm 2026-03-10T13:46:02.091512+0000 mgr.a (mgr.14150) 175 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-10T13:46:04.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:03 vm00 bash[20748]: audit 2026-03-10T13:46:02.164683+0000 mgr.a (mgr.14150) 176 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T13:46:04.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:03 vm00 bash[20748]: audit 2026-03-10T13:46:02.164683+0000 mgr.a (mgr.14150) 176 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-10T13:46:04.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:03 vm00 bash[20748]: cephadm 2026-03-10T13:46:02.175431+0000 mgr.a (mgr.14150) 177 : cephadm [INF] Deploying daemon grafana.vm00 on vm00 2026-03-10T13:46:04.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:03 vm00 bash[20748]: cephadm 2026-03-10T13:46:02.175431+0000 mgr.a (mgr.14150) 177 : cephadm [INF] Deploying daemon grafana.vm00 on vm00 2026-03-10T13:46:04.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:03 vm00 bash[20748]: cluster 2026-03-10T13:46:02.222613+0000 mgr.a (mgr.14150) 178 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:04.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:03 vm00 bash[20748]: cluster 2026-03-10T13:46:02.222613+0000 mgr.a (mgr.14150) 178 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:05.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:05 vm07 bash[23044]: cluster 2026-03-10T13:46:04.222920+0000 mgr.a (mgr.14150) 179 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:05.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:05 vm07 bash[23044]: cluster 2026-03-10T13:46:04.222920+0000 mgr.a (mgr.14150) 179 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:06.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:05 vm08 bash[23387]: cluster 2026-03-10T13:46:04.222920+0000 mgr.a (mgr.14150) 179 : 
cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:06.087 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:05 vm08 bash[23387]: cluster 2026-03-10T13:46:04.222920+0000 mgr.a (mgr.14150) 179 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:06.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:05 vm00 bash[20748]: cluster 2026-03-10T13:46:04.222920+0000 mgr.a (mgr.14150) 179 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:06.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:05 vm00 bash[20748]: cluster 2026-03-10T13:46:04.222920+0000 mgr.a (mgr.14150) 179 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:07.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:07 vm07 bash[23044]: cluster 2026-03-10T13:46:06.223145+0000 mgr.a (mgr.14150) 180 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:07.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:07 vm07 bash[23044]: cluster 2026-03-10T13:46:06.223145+0000 mgr.a (mgr.14150) 180 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:08.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:07 vm08 bash[23387]: cluster 2026-03-10T13:46:06.223145+0000 mgr.a (mgr.14150) 180 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:08.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:07 vm08 bash[23387]: cluster 2026-03-10T13:46:06.223145+0000 mgr.a (mgr.14150) 180 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:08.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:07 vm00 
bash[20748]: cluster 2026-03-10T13:46:06.223145+0000 mgr.a (mgr.14150) 180 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:08.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:07 vm00 bash[20748]: cluster 2026-03-10T13:46:06.223145+0000 mgr.a (mgr.14150) 180 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:09 vm08 bash[23387]: cluster 2026-03-10T13:46:08.223395+0000 mgr.a (mgr.14150) 181 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:10.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:09 vm08 bash[23387]: cluster 2026-03-10T13:46:08.223395+0000 mgr.a (mgr.14150) 181 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:10.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:09 vm00 bash[20748]: cluster 2026-03-10T13:46:08.223395+0000 mgr.a (mgr.14150) 181 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:10.216 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:09 vm00 bash[20748]: cluster 2026-03-10T13:46:08.223395+0000 mgr.a (mgr.14150) 181 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:10.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:09 vm07 bash[23044]: cluster 2026-03-10T13:46:08.223395+0000 mgr.a (mgr.14150) 181 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:10.249 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:09 vm07 bash[23044]: cluster 2026-03-10T13:46:08.223395+0000 mgr.a (mgr.14150) 181 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T13:46:11.644 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:11 vm00 bash[20748]: cluster 2026-03-10T13:46:10.223644+0000 mgr.a (mgr.14150) 182 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:11.645 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:11 vm00 bash[20748]: cluster 2026-03-10T13:46:10.223644+0000 mgr.a (mgr.14150) 182 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:11.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:11 vm07 bash[23044]: cluster 2026-03-10T13:46:10.223644+0000 mgr.a (mgr.14150) 182 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:11.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:11 vm07 bash[23044]: cluster 2026-03-10T13:46:10.223644+0000 mgr.a (mgr.14150) 182 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:11.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:11 vm08 bash[23387]: cluster 2026-03-10T13:46:10.223644+0000 mgr.a (mgr.14150) 182 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:11.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:11 vm08 bash[23387]: cluster 2026-03-10T13:46:10.223644+0000 mgr.a (mgr.14150) 182 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:11.930 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:11 vm00 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T13:46:11.930 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:11 vm00 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T13:46:11.930 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 13:46:11 vm00 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T13:46:12.218 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 13:46:12 vm00 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T13:46:12.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:12 vm00 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T13:46:12.218 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:12 vm00 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T13:46:13.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:13 vm00 bash[20748]: audit 2026-03-10T13:46:12.154489+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:13 vm00 bash[20748]: audit 2026-03-10T13:46:12.154489+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:13 vm00 bash[20748]: audit 2026-03-10T13:46:12.159813+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:13 vm00 bash[20748]: audit 2026-03-10T13:46:12.159813+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:13 vm00 bash[20748]: audit 2026-03-10T13:46:12.164878+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:13 vm00 bash[20748]: audit 2026-03-10T13:46:12.164878+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:13 vm00 bash[20748]: audit 2026-03-10T13:46:12.168920+0000 mon.a (mon.0) 462 : audit [INF] 
from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:13 vm00 bash[20748]: audit 2026-03-10T13:46:12.168920+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:13 vm00 bash[20748]: audit 2026-03-10T13:46:12.191678+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:46:13.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:13 vm00 bash[20748]: audit 2026-03-10T13:46:12.191678+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:46:13.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:13 vm00 bash[20748]: cluster 2026-03-10T13:46:12.227421+0000 mgr.a (mgr.14150) 183 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:13.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:13 vm00 bash[20748]: cluster 2026-03-10T13:46:12.227421+0000 mgr.a (mgr.14150) 183 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:13.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:13 vm00 bash[20748]: audit 2026-03-10T13:46:12.518709+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.250 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:13 vm00 bash[20748]: audit 2026-03-10T13:46:12.518709+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:13 vm07 bash[23044]: audit 2026-03-10T13:46:12.154489+0000 mon.a (mon.0) 459 : audit 
[INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:13 vm07 bash[23044]: audit 2026-03-10T13:46:12.154489+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:13 vm07 bash[23044]: audit 2026-03-10T13:46:12.159813+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:13 vm07 bash[23044]: audit 2026-03-10T13:46:12.159813+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:13 vm07 bash[23044]: audit 2026-03-10T13:46:12.164878+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:13 vm07 bash[23044]: audit 2026-03-10T13:46:12.164878+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:13 vm07 bash[23044]: audit 2026-03-10T13:46:12.168920+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:13 vm07 bash[23044]: audit 2026-03-10T13:46:12.168920+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:13 vm07 bash[23044]: audit 2026-03-10T13:46:12.191678+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:46:13.499 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:13 vm07 bash[23044]: audit 2026-03-10T13:46:12.191678+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:46:13.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:13 vm07 bash[23044]: cluster 2026-03-10T13:46:12.227421+0000 mgr.a (mgr.14150) 183 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:13.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:13 vm07 bash[23044]: cluster 2026-03-10T13:46:12.227421+0000 mgr.a (mgr.14150) 183 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:13.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:13 vm07 bash[23044]: audit 2026-03-10T13:46:12.518709+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:13 vm07 bash[23044]: audit 2026-03-10T13:46:12.518709+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:13 vm08 bash[23387]: audit 2026-03-10T13:46:12.154489+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:13 vm08 bash[23387]: audit 2026-03-10T13:46:12.154489+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:13 vm08 bash[23387]: audit 2026-03-10T13:46:12.159813+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 
13:46:13 vm08 bash[23387]: audit 2026-03-10T13:46:12.159813+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:13 vm08 bash[23387]: audit 2026-03-10T13:46:12.164878+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:13 vm08 bash[23387]: audit 2026-03-10T13:46:12.164878+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:13 vm08 bash[23387]: audit 2026-03-10T13:46:12.168920+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:13 vm08 bash[23387]: audit 2026-03-10T13:46:12.168920+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:13 vm08 bash[23387]: audit 2026-03-10T13:46:12.191678+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:46:13.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:13 vm08 bash[23387]: audit 2026-03-10T13:46:12.191678+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:46:13.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:13 vm08 bash[23387]: cluster 2026-03-10T13:46:12.227421+0000 mgr.a (mgr.14150) 183 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:13.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:13 vm08 
bash[23387]: cluster 2026-03-10T13:46:12.227421+0000 mgr.a (mgr.14150) 183 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:13.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:13 vm08 bash[23387]: audit 2026-03-10T13:46:12.518709+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:13.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:13 vm08 bash[23387]: audit 2026-03-10T13:46:12.518709+0000 mon.a (mon.0) 464 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:15.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:15 vm08 bash[23387]: cluster 2026-03-10T13:46:14.227655+0000 mgr.a (mgr.14150) 184 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:15.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:15 vm08 bash[23387]: cluster 2026-03-10T13:46:14.227655+0000 mgr.a (mgr.14150) 184 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:15.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:15 vm00 bash[20748]: cluster 2026-03-10T13:46:14.227655+0000 mgr.a (mgr.14150) 184 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:15.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:15 vm00 bash[20748]: cluster 2026-03-10T13:46:14.227655+0000 mgr.a (mgr.14150) 184 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:15.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:15 vm07 bash[23044]: cluster 2026-03-10T13:46:14.227655+0000 mgr.a (mgr.14150) 184 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:15.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 
13:46:15 vm07 bash[23044]: cluster 2026-03-10T13:46:14.227655+0000 mgr.a (mgr.14150) 184 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:17.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:17 vm08 bash[23387]: cluster 2026-03-10T13:46:16.227869+0000 mgr.a (mgr.14150) 185 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:17.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:17 vm08 bash[23387]: cluster 2026-03-10T13:46:16.227869+0000 mgr.a (mgr.14150) 185 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:17.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:17 vm08 bash[23387]: audit 2026-03-10T13:46:17.156002+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:17 vm08 bash[23387]: audit 2026-03-10T13:46:17.156002+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:17 vm08 bash[23387]: audit 2026-03-10T13:46:17.161103+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:17 vm08 bash[23387]: audit 2026-03-10T13:46:17.161103+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:17 vm08 bash[23387]: audit 2026-03-10T13:46:17.179238+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:17 vm08 bash[23387]: audit 2026-03-10T13:46:17.179238+0000 mon.a (mon.0) 467 
: audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:17 vm08 bash[23387]: audit 2026-03-10T13:46:17.183564+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:17 vm08 bash[23387]: audit 2026-03-10T13:46:17.183564+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:17 vm00 bash[20748]: cluster 2026-03-10T13:46:16.227869+0000 mgr.a (mgr.14150) 185 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:17.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:17 vm00 bash[20748]: cluster 2026-03-10T13:46:16.227869+0000 mgr.a (mgr.14150) 185 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:17.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:17 vm00 bash[20748]: audit 2026-03-10T13:46:17.156002+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:17 vm00 bash[20748]: audit 2026-03-10T13:46:17.156002+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:17 vm00 bash[20748]: audit 2026-03-10T13:46:17.161103+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:17 vm00 bash[20748]: audit 2026-03-10T13:46:17.161103+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.717 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:17 vm00 bash[20748]: audit 2026-03-10T13:46:17.179238+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:17 vm00 bash[20748]: audit 2026-03-10T13:46:17.179238+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:17 vm00 bash[20748]: audit 2026-03-10T13:46:17.183564+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:17 vm00 bash[20748]: audit 2026-03-10T13:46:17.183564+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:17 vm07 bash[23044]: cluster 2026-03-10T13:46:16.227869+0000 mgr.a (mgr.14150) 185 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:17.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:17 vm07 bash[23044]: cluster 2026-03-10T13:46:16.227869+0000 mgr.a (mgr.14150) 185 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:17.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:17 vm07 bash[23044]: audit 2026-03-10T13:46:17.156002+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:17 vm07 bash[23044]: audit 2026-03-10T13:46:17.156002+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:17 vm07 bash[23044]: audit 2026-03-10T13:46:17.161103+0000 
mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:17 vm07 bash[23044]: audit 2026-03-10T13:46:17.161103+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:17 vm07 bash[23044]: audit 2026-03-10T13:46:17.179238+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:17 vm07 bash[23044]: audit 2026-03-10T13:46:17.179238+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:17 vm07 bash[23044]: audit 2026-03-10T13:46:17.183564+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:17.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:17 vm07 bash[23044]: audit 2026-03-10T13:46:17.183564+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:18.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:18 vm00 bash[20748]: audit 2026-03-10T13:46:17.384929+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:18.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:18 vm00 bash[20748]: audit 2026-03-10T13:46:17.384929+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:18.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:18 vm00 bash[20748]: audit 2026-03-10T13:46:17.389366+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:18.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:18 
vm00 bash[20748]: audit 2026-03-10T13:46:17.389366+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:18.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:18 vm00 bash[20748]: audit 2026-03-10T13:46:17.485915+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:46:18.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:18 vm00 bash[20748]: audit 2026-03-10T13:46:17.485915+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:46:18.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:18 vm00 bash[20748]: audit 2026-03-10T13:46:17.486375+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:46:18.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:18 vm00 bash[20748]: audit 2026-03-10T13:46:17.486375+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:46:18.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:18 vm00 bash[20748]: audit 2026-03-10T13:46:17.491183+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:18.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:18 vm00 bash[20748]: audit 2026-03-10T13:46:17.491183+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' 2026-03-10T13:46:18.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:18 vm00 bash[20748]: cephadm 2026-03-10T13:46:17.496295+0000 mgr.a (mgr.14150) 186 : cephadm [INF] Deploying daemon alertmanager.vm08 on vm08 
2026-03-10T13:46:18.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:18 vm00 bash[20748]: cephadm 2026-03-10T13:46:17.496295+0000 mgr.a (mgr.14150) 186 : cephadm [INF] Deploying daemon alertmanager.vm08 on vm08
2026-03-10T13:46:18.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:18 vm07 bash[23044]: audit 2026-03-10T13:46:17.384929+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:18.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:18 vm07 bash[23044]: audit 2026-03-10T13:46:17.389366+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:18.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:18 vm07 bash[23044]: audit 2026-03-10T13:46:17.485915+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:46:18.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:18 vm07 bash[23044]: audit 2026-03-10T13:46:17.486375+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:46:18.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:18 vm07 bash[23044]: audit 2026-03-10T13:46:17.491183+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:18.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:18 vm07 bash[23044]: cephadm 2026-03-10T13:46:17.496295+0000 mgr.a (mgr.14150) 186 : cephadm [INF] Deploying daemon alertmanager.vm08 on vm08
2026-03-10T13:46:18.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:18 vm08 bash[23387]: audit 2026-03-10T13:46:17.384929+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:18.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:18 vm08 bash[23387]: audit 2026-03-10T13:46:17.389366+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:18.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:18 vm08 bash[23387]: audit 2026-03-10T13:46:17.485915+0000 mon.a (mon.0) 471 : audit [DBG] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:46:18.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:18 vm08 bash[23387]: audit 2026-03-10T13:46:17.486375+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:46:18.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:18 vm08 bash[23387]: audit 2026-03-10T13:46:17.491183+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:18.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:18 vm08 bash[23387]: cephadm 2026-03-10T13:46:17.496295+0000 mgr.a (mgr.14150) 186 : cephadm [INF] Deploying daemon alertmanager.vm08 on vm08
2026-03-10T13:46:19.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:19 vm07 bash[23044]: cluster 2026-03-10T13:46:18.228123+0000 mgr.a (mgr.14150) 187 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:19.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:19 vm08 bash[23387]: cluster 2026-03-10T13:46:18.228123+0000 mgr.a (mgr.14150) 187 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:19.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:19 vm00 bash[20748]: cluster 2026-03-10T13:46:18.228123+0000 mgr.a (mgr.14150) 187 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:21.500 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:21 vm08 bash[23387]: cluster 2026-03-10T13:46:20.228354+0000 mgr.a (mgr.14150) 188 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:21.787 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 13:46:21 vm08 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:46:21.787 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:21 vm08 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:46:21.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:21 vm00 bash[20748]: cluster 2026-03-10T13:46:20.228354+0000 mgr.a (mgr.14150) 188 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:21.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:21 vm07 bash[23044]: cluster 2026-03-10T13:46:20.228354+0000 mgr.a (mgr.14150) 188 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:22.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:21 vm08 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:46:22.059 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 13:46:21 vm08 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:46:23.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:23 vm08 bash[23387]: audit 2026-03-10T13:46:22.011651+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:23.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:23 vm08 bash[23387]: audit 2026-03-10T13:46:22.016560+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:23.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:23 vm08 bash[23387]: audit 2026-03-10T13:46:22.019801+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:23.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:23 vm08 bash[23387]: audit 2026-03-10T13:46:22.022607+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:23.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:23 vm08 bash[23387]: audit 2026-03-10T13:46:22.523604+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:23.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:23 vm00 bash[20748]: audit 2026-03-10T13:46:22.011651+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:23.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:23 vm00 bash[20748]: audit 2026-03-10T13:46:22.016560+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:23.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:23 vm00 bash[20748]: audit 2026-03-10T13:46:22.019801+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:23.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:23 vm00 bash[20748]: audit 2026-03-10T13:46:22.022607+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:23.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:23 vm00 bash[20748]: audit 2026-03-10T13:46:22.523604+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:23.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:23 vm07 bash[23044]: audit 2026-03-10T13:46:22.011651+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:23.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:23 vm07 bash[23044]: audit 2026-03-10T13:46:22.016560+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:23.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:23 vm07 bash[23044]: audit 2026-03-10T13:46:22.019801+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:23.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:23 vm07 bash[23044]: audit 2026-03-10T13:46:22.022607+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:23.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:23 vm07 bash[23044]: audit 2026-03-10T13:46:22.523604+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:24.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:24 vm08 bash[23387]: cephadm 2026-03-10T13:46:22.174349+0000 mgr.a (mgr.14150) 189 : cephadm [INF] Deploying daemon prometheus.vm07 on vm07
2026-03-10T13:46:24.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:24 vm08 bash[23387]: cluster 2026-03-10T13:46:22.228581+0000 mgr.a (mgr.14150) 190 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:24.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:24 vm00 bash[20748]: cephadm 2026-03-10T13:46:22.174349+0000 mgr.a (mgr.14150) 189 : cephadm [INF] Deploying daemon prometheus.vm07 on vm07
2026-03-10T13:46:24.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:24 vm00 bash[20748]: cluster 2026-03-10T13:46:22.228581+0000 mgr.a (mgr.14150) 190 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:24.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:24 vm07 bash[23044]: cephadm 2026-03-10T13:46:22.174349+0000 mgr.a (mgr.14150) 189 : cephadm [INF] Deploying daemon prometheus.vm07 on vm07
2026-03-10T13:46:24.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:24 vm07 bash[23044]: cluster 2026-03-10T13:46:22.228581+0000 mgr.a (mgr.14150) 190 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:25.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:25 vm00 bash[20748]: cluster 2026-03-10T13:46:24.228820+0000 mgr.a (mgr.14150) 191 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:25.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:25 vm07 bash[23044]: cluster 2026-03-10T13:46:24.228820+0000 mgr.a (mgr.14150) 191 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:25.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:25 vm08 bash[23387]: cluster 2026-03-10T13:46:24.228820+0000 mgr.a (mgr.14150) 191 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:27.544 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:27 vm07 bash[23044]: cluster 2026-03-10T13:46:26.229093+0000 mgr.a (mgr.14150) 192 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:27.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:27 vm08 bash[23387]: cluster 2026-03-10T13:46:26.229093+0000 mgr.a (mgr.14150) 192 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:27.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:27 vm00 bash[20748]: cluster 2026-03-10T13:46:26.229093+0000 mgr.a (mgr.14150) 192 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:28.599 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:28 vm07 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:46:28.599 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:28 vm07 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:46:28.599 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:46:28 vm07 systemd[1]: /etc/systemd/system/ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T13:46:29.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:29 vm08 bash[23387]: cluster 2026-03-10T13:46:28.229328+0000 mgr.a (mgr.14150) 193 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:29.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:29 vm08 bash[23387]: audit 2026-03-10T13:46:28.584743+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:29.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:29 vm08 bash[23387]: audit 2026-03-10T13:46:28.589958+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:29.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:29 vm08 bash[23387]: audit 2026-03-10T13:46:28.593995+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:29.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:29 vm08 bash[23387]: audit 2026-03-10T13:46:28.596181+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
2026-03-10T13:46:29.846 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:29 vm07 bash[23484]: ignoring --setuser ceph since I am not root
2026-03-10T13:46:29.847 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:29 vm07 bash[23484]: ignoring --setgroup ceph since I am not root
2026-03-10T13:46:29.847 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:29 vm07 bash[23484]: debug 2026-03-10T13:46:29.707+0000 7eff86d03140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T13:46:29.847 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:29 vm07 bash[23484]: debug 2026-03-10T13:46:29.739+0000 7eff86d03140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T13:46:29.847 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:29 vm07 bash[23044]: cluster 2026-03-10T13:46:28.229328+0000 mgr.a (mgr.14150) 193 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:29.847 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:29 vm07 bash[23044]: audit 2026-03-10T13:46:28.584743+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:29.847 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:29 vm07 bash[23044]: audit 2026-03-10T13:46:28.589958+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:29.847 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:29 vm07 bash[23044]: audit 2026-03-10T13:46:28.593995+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:29.847 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:29 vm07 bash[23044]: audit 2026-03-10T13:46:28.596181+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
2026-03-10T13:46:29.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:29 vm00 bash[20748]: cluster 2026-03-10T13:46:28.229328+0000 mgr.a (mgr.14150) 193 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:29.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:29 vm00 bash[20748]: audit 2026-03-10T13:46:28.584743+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:29.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:29 vm00 bash[20748]: audit 2026-03-10T13:46:28.589958+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:29.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:29 vm00 bash[20748]: audit 2026-03-10T13:46:28.593995+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a'
2026-03-10T13:46:29.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:29 vm00 bash[20748]: audit 2026-03-10T13:46:28.596181+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
2026-03-10T13:46:29.967 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:29 vm00 bash[21015]: ignoring --setuser ceph since I am not root
2026-03-10T13:46:29.967 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:29 vm00 bash[21015]: ignoring --setgroup ceph since I am not root
2026-03-10T13:46:29.967 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:29 vm00 bash[21015]: debug 2026-03-10T13:46:29.696+0000 7f4fd9863140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T13:46:29.967 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:29 vm00 bash[21015]: debug 2026-03-10T13:46:29.728+0000 7f4fd9863140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T13:46:29.967 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:29 vm00 bash[21015]: debug 2026-03-10T13:46:29.832+0000 7f4fd9863140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T13:46:30.116 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:29 vm07 bash[23484]: debug 2026-03-10T13:46:29.843+0000 7eff86d03140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T13:46:30.467 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:30 vm00 bash[21015]: debug 2026-03-10T13:46:30.104+0000 7f4fd9863140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T13:46:30.498 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:30 vm07 bash[23484]: debug 2026-03-10T13:46:30.115+0000 7eff86d03140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T13:46:30.895 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:30 vm00 bash[20748]: audit 2026-03-10T13:46:29.597451+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
2026-03-10T13:46:30.895 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:30 vm00 bash[20748]: audit 2026-03-10T13:46:29.597451+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T13:46:30.895 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:30 vm00 bash[20748]: cluster 2026-03-10T13:46:29.601227+0000 mon.a (mon.0) 484 : cluster [DBG] mgrmap e15: a(active, since 3m), standbys: b 2026-03-10T13:46:30.895 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:30 vm00 bash[20748]: cluster 2026-03-10T13:46:29.601227+0000 mon.a (mon.0) 484 : cluster [DBG] mgrmap e15: a(active, since 3m), standbys: b 2026-03-10T13:46:30.895 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:30 vm00 bash[21015]: debug 2026-03-10T13:46:30.544+0000 7f4fd9863140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T13:46:30.895 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:30 vm00 bash[21015]: debug 2026-03-10T13:46:30.624+0000 7f4fd9863140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T13:46:30.895 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:30 vm00 bash[21015]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T13:46:30.895 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:30 vm00 bash[21015]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T13:46:30.895 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:30 vm00 bash[21015]: from numpy import show_config as show_numpy_config 2026-03-10T13:46:30.895 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:30 vm00 bash[21015]: debug 2026-03-10T13:46:30.748+0000 7f4fd9863140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T13:46:30.905 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:30 vm07 bash[23044]: audit 2026-03-10T13:46:29.597451+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T13:46:30.905 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:30 vm07 bash[23044]: audit 2026-03-10T13:46:29.597451+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T13:46:30.905 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:30 vm07 bash[23044]: cluster 2026-03-10T13:46:29.601227+0000 mon.a (mon.0) 484 : cluster [DBG] mgrmap e15: a(active, since 3m), standbys: b 2026-03-10T13:46:30.905 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:30 vm07 bash[23044]: cluster 2026-03-10T13:46:29.601227+0000 mon.a (mon.0) 484 : cluster [DBG] mgrmap e15: a(active, since 3m), standbys: b 2026-03-10T13:46:30.906 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:30 vm07 bash[23484]: debug 2026-03-10T13:46:30.543+0000 7eff86d03140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T13:46:30.906 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:30 vm07 bash[23484]: debug 2026-03-10T13:46:30.627+0000 7eff86d03140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T13:46:30.906 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:30 vm07 bash[23484]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported 
from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T13:46:30.906 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:30 vm07 bash[23484]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T13:46:30.906 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:30 vm07 bash[23484]: from numpy import show_config as show_numpy_config 2026-03-10T13:46:30.906 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:30 vm07 bash[23484]: debug 2026-03-10T13:46:30.755+0000 7eff86d03140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T13:46:31.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:30 vm08 bash[23387]: audit 2026-03-10T13:46:29.597451+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T13:46:31.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:30 vm08 bash[23387]: audit 2026-03-10T13:46:29.597451+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.100:0/3275131541' entity='mgr.a' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T13:46:31.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:30 vm08 bash[23387]: cluster 2026-03-10T13:46:29.601227+0000 mon.a (mon.0) 484 : cluster [DBG] mgrmap e15: a(active, since 3m), standbys: b 2026-03-10T13:46:31.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:30 vm08 bash[23387]: cluster 2026-03-10T13:46:29.601227+0000 mon.a (mon.0) 484 : cluster [DBG] mgrmap e15: a(active, since 3m), standbys: b 2026-03-10T13:46:31.217 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:30 vm00 
bash[21015]: debug 2026-03-10T13:46:30.888+0000 7f4fd9863140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T13:46:31.217 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:30 vm00 bash[21015]: debug 2026-03-10T13:46:30.928+0000 7f4fd9863140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T13:46:31.217 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:30 vm00 bash[21015]: debug 2026-03-10T13:46:30.964+0000 7f4fd9863140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T13:46:31.217 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:31 vm00 bash[21015]: debug 2026-03-10T13:46:31.008+0000 7f4fd9863140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T13:46:31.217 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:31 vm00 bash[21015]: debug 2026-03-10T13:46:31.060+0000 7f4fd9863140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T13:46:31.248 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:30 vm07 bash[23484]: debug 2026-03-10T13:46:30.903+0000 7eff86d03140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T13:46:31.248 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:30 vm07 bash[23484]: debug 2026-03-10T13:46:30.943+0000 7eff86d03140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T13:46:31.248 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:30 vm07 bash[23484]: debug 2026-03-10T13:46:30.979+0000 7eff86d03140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T13:46:31.248 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:31 vm07 bash[23484]: debug 2026-03-10T13:46:31.023+0000 7eff86d03140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T13:46:31.248 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:31 vm07 bash[23484]: debug 2026-03-10T13:46:31.075+0000 7eff86d03140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T13:46:31.763 
INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:31 vm00 bash[21015]: debug 2026-03-10T13:46:31.480+0000 7f4fd9863140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T13:46:31.763 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:31 vm00 bash[21015]: debug 2026-03-10T13:46:31.512+0000 7f4fd9863140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T13:46:31.763 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:31 vm00 bash[21015]: debug 2026-03-10T13:46:31.548+0000 7f4fd9863140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T13:46:31.763 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:31 vm00 bash[21015]: debug 2026-03-10T13:46:31.684+0000 7f4fd9863140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T13:46:31.763 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:31 vm00 bash[21015]: debug 2026-03-10T13:46:31.720+0000 7f4fd9863140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T13:46:31.789 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:31 vm07 bash[23484]: debug 2026-03-10T13:46:31.499+0000 7eff86d03140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T13:46:31.789 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:31 vm07 bash[23484]: debug 2026-03-10T13:46:31.535+0000 7eff86d03140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T13:46:31.789 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:31 vm07 bash[23484]: debug 2026-03-10T13:46:31.571+0000 7eff86d03140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T13:46:31.789 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:31 vm07 bash[23484]: debug 2026-03-10T13:46:31.707+0000 7eff86d03140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T13:46:31.789 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:31 vm07 bash[23484]: debug 2026-03-10T13:46:31.747+0000 7eff86d03140 -1 mgr[py] Module crash has missing NOTIFY_TYPES 
member 2026-03-10T13:46:32.029 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:31 vm00 bash[21015]: debug 2026-03-10T13:46:31.760+0000 7f4fd9863140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T13:46:32.029 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:31 vm00 bash[21015]: debug 2026-03-10T13:46:31.864+0000 7f4fd9863140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:46:32.050 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:31 vm07 bash[23484]: debug 2026-03-10T13:46:31.787+0000 7eff86d03140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T13:46:32.050 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:31 vm07 bash[23484]: debug 2026-03-10T13:46:31.891+0000 7eff86d03140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:46:32.306 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23484]: debug 2026-03-10T13:46:32.047+0000 7eff86d03140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T13:46:32.306 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23484]: debug 2026-03-10T13:46:32.223+0000 7eff86d03140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T13:46:32.306 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23484]: debug 2026-03-10T13:46:32.259+0000 7eff86d03140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T13:46:32.418 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[21015]: debug 2026-03-10T13:46:32.024+0000 7f4fd9863140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T13:46:32.418 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[21015]: debug 2026-03-10T13:46:32.192+0000 7f4fd9863140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T13:46:32.418 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[21015]: debug 2026-03-10T13:46:32.228+0000 7f4fd9863140 
-1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T13:46:32.418 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[21015]: debug 2026-03-10T13:46:32.268+0000 7f4fd9863140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T13:46:32.681 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23484]: debug 2026-03-10T13:46:32.303+0000 7eff86d03140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T13:46:32.681 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23484]: debug 2026-03-10T13:46:32.451+0000 7eff86d03140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:46:32.687 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[21015]: debug 2026-03-10T13:46:32.412+0000 7f4fd9863140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T13:46:32.687 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[21015]: debug 2026-03-10T13:46:32.624+0000 7f4fd9863140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: cluster 2026-03-10T13:46:32.631641+0000 mon.a (mon.0) 485 : cluster [INF] Active manager daemon a restarted 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: cluster 2026-03-10T13:46:32.631641+0000 mon.a (mon.0) 485 : cluster [INF] Active manager daemon a restarted 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: cluster 2026-03-10T13:46:32.631847+0000 mon.a (mon.0) 486 : cluster [INF] Activating manager daemon a 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: cluster 2026-03-10T13:46:32.631847+0000 mon.a (mon.0) 486 : cluster [INF] Activating manager daemon a 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 
bash[20748]: cluster 2026-03-10T13:46:32.646219+0000 mon.a (mon.0) 487 : cluster [DBG] osdmap e23: 3 total, 3 up, 3 in 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: cluster 2026-03-10T13:46:32.646219+0000 mon.a (mon.0) 487 : cluster [DBG] osdmap e23: 3 total, 3 up, 3 in 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: cluster 2026-03-10T13:46:32.646545+0000 mon.a (mon.0) 488 : cluster [DBG] mgrmap e16: a(active, starting, since 0.0147801s), standbys: b 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: cluster 2026-03-10T13:46:32.646545+0000 mon.a (mon.0) 488 : cluster [DBG] mgrmap e16: a(active, starting, since 0.0147801s), standbys: b 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.648636+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.648636+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.648694+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.648694+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 
bash[20748]: audit 2026-03-10T13:46:32.648736+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.648736+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.649140+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.649140+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.649489+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.649489+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.649855+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:46:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.649855+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:46:32.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.650238+0000 mon.a (mon.0) 495 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:46:32.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.650238+0000 mon.a (mon.0) 495 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:46:32.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.650597+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:46:32.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.650597+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:46:32.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.651075+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:46:32.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.651075+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:46:32.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.651493+0000 
mon.a (mon.0) 498 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:46:32.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.651493+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:46:32.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.651942+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:46:32.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.651942+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:46:32.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: cluster 2026-03-10T13:46:32.658240+0000 mon.a (mon.0) 500 : cluster [INF] Manager daemon a is now available 2026-03-10T13:46:32.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: cluster 2026-03-10T13:46:32.658240+0000 mon.a (mon.0) 500 : cluster [INF] Manager daemon a is now available 2026-03-10T13:46:32.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.679723+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:32.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[20748]: audit 2026-03-10T13:46:32.679723+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:32.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[21015]: [10/Mar/2026:13:46:32] ENGINE Bus STARTING 2026-03-10T13:46:32.968 
INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[21015]: CherryPy Checker: 2026-03-10T13:46:32.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[21015]: The Application mounted at '' has an empty config. 2026-03-10T13:46:32.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[21015]: [10/Mar/2026:13:46:32] ENGINE Serving on http://:::9283 2026-03-10T13:46:32.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:32 vm00 bash[21015]: [10/Mar/2026:13:46:32] ENGINE Bus STARTED 2026-03-10T13:46:32.998 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23484]: debug 2026-03-10T13:46:32.679+0000 7eff86d03140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T13:46:32.998 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23484]: [10/Mar/2026:13:46:32] ENGINE Bus STARTING 2026-03-10T13:46:32.998 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23484]: CherryPy Checker: 2026-03-10T13:46:32.998 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23484]: The Application mounted at '' has an empty config. 
2026-03-10T13:46:32.998 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23484]: [10/Mar/2026:13:46:32] ENGINE Serving on http://:::9283 2026-03-10T13:46:32.998 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23484]: [10/Mar/2026:13:46:32] ENGINE Bus STARTED 2026-03-10T13:46:32.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: cluster 2026-03-10T13:46:32.631641+0000 mon.a (mon.0) 485 : cluster [INF] Active manager daemon a restarted 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: cluster 2026-03-10T13:46:32.631641+0000 mon.a (mon.0) 485 : cluster [INF] Active manager daemon a restarted 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: cluster 2026-03-10T13:46:32.631847+0000 mon.a (mon.0) 486 : cluster [INF] Activating manager daemon a 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: cluster 2026-03-10T13:46:32.631847+0000 mon.a (mon.0) 486 : cluster [INF] Activating manager daemon a 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: cluster 2026-03-10T13:46:32.646219+0000 mon.a (mon.0) 487 : cluster [DBG] osdmap e23: 3 total, 3 up, 3 in 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: cluster 2026-03-10T13:46:32.646219+0000 mon.a (mon.0) 487 : cluster [DBG] osdmap e23: 3 total, 3 up, 3 in 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: cluster 2026-03-10T13:46:32.646545+0000 mon.a (mon.0) 488 : cluster [DBG] mgrmap e16: a(active, starting, since 0.0147801s), standbys: b 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: cluster 2026-03-10T13:46:32.646545+0000 mon.a (mon.0) 488 : cluster [DBG] mgrmap e16: a(active, starting, since 0.0147801s), standbys: b 2026-03-10T13:46:32.999 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.648636+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.648636+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.648694+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.648694+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.648736+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.648736+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.649140+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:46:32.999 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.649140+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.649489+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.649489+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.649855+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.649855+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.650238+0000 mon.a (mon.0) 495 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.650238+0000 mon.a (mon.0) 495 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:46:32.999 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.650597+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.650597+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.651075+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.651075+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.651493+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.651493+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.651942+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 
2026-03-10T13:46:32.651942+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: cluster 2026-03-10T13:46:32.658240+0000 mon.a (mon.0) 500 : cluster [INF] Manager daemon a is now available 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: cluster 2026-03-10T13:46:32.658240+0000 mon.a (mon.0) 500 : cluster [INF] Manager daemon a is now available 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.679723+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:32.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:32 vm07 bash[23044]: audit 2026-03-10T13:46:32.679723+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: cluster 2026-03-10T13:46:32.631641+0000 mon.a (mon.0) 485 : cluster [INF] Active manager daemon a restarted 2026-03-10T13:46:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: cluster 2026-03-10T13:46:32.631641+0000 mon.a (mon.0) 485 : cluster [INF] Active manager daemon a restarted 2026-03-10T13:46:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: cluster 2026-03-10T13:46:32.631847+0000 mon.a (mon.0) 486 : cluster [INF] Activating manager daemon a 2026-03-10T13:46:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: cluster 2026-03-10T13:46:32.631847+0000 mon.a (mon.0) 486 : cluster [INF] Activating manager daemon a 2026-03-10T13:46:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: cluster 2026-03-10T13:46:32.646219+0000 mon.a (mon.0) 487 : 
cluster [DBG] osdmap e23: 3 total, 3 up, 3 in 2026-03-10T13:46:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: cluster 2026-03-10T13:46:32.646219+0000 mon.a (mon.0) 487 : cluster [DBG] osdmap e23: 3 total, 3 up, 3 in 2026-03-10T13:46:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: cluster 2026-03-10T13:46:32.646545+0000 mon.a (mon.0) 488 : cluster [DBG] mgrmap e16: a(active, starting, since 0.0147801s), standbys: b 2026-03-10T13:46:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: cluster 2026-03-10T13:46:32.646545+0000 mon.a (mon.0) 488 : cluster [DBG] mgrmap e16: a(active, starting, since 0.0147801s), standbys: b 2026-03-10T13:46:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.648636+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:46:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.648636+0000 mon.a (mon.0) 489 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T13:46:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.648694+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:46:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.648694+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T13:46:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.648736+0000 mon.a (mon.0) 491 : 
audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:46:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.648736+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T13:46:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.649140+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:46:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.649140+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-10T13:46:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.649489+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T13:46:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.649489+0000 mon.a (mon.0) 493 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-10T13:46:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.649855+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:46:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.649855+0000 mon.a 
(mon.0) 494 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T13:46:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.650238+0000 mon.a (mon.0) 495 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:46:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.650238+0000 mon.a (mon.0) 495 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T13:46:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.650597+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:46:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.650597+0000 mon.a (mon.0) 496 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T13:46:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.651075+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:46:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.651075+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T13:46:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.651493+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14388 
192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:46:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.651493+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T13:46:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.651942+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:46:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.651942+0000 mon.a (mon.0) 499 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T13:46:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: cluster 2026-03-10T13:46:32.658240+0000 mon.a (mon.0) 500 : cluster [INF] Manager daemon a is now available 2026-03-10T13:46:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: cluster 2026-03-10T13:46:32.658240+0000 mon.a (mon.0) 500 : cluster [INF] Manager daemon a is now available 2026-03-10T13:46:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.679723+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:32 vm08 bash[23387]: audit 2026-03-10T13:46:32.679723+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: audit 2026-03-10T13:46:32.685631+0000 mon.b (mon.2) 12 : audit [DBG] from='mgr.? 
192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: audit 2026-03-10T13:46:32.685631+0000 mon.b (mon.2) 12 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: audit 2026-03-10T13:46:32.692551+0000 mon.b (mon.2) 13 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: audit 2026-03-10T13:46:32.692551+0000 mon.b (mon.2) 13 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: audit 2026-03-10T13:46:32.693021+0000 mon.b (mon.2) 14 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: audit 2026-03-10T13:46:32.693021+0000 mon.b (mon.2) 14 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: audit 2026-03-10T13:46:32.693244+0000 mon.b (mon.2) 15 : audit [DBG] from='mgr.? 
192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: audit 2026-03-10T13:46:32.693244+0000 mon.b (mon.2) 15 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: cluster 2026-03-10T13:46:32.694305+0000 mon.a (mon.0) 502 : cluster [DBG] Standby manager daemon b restarted 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: cluster 2026-03-10T13:46:32.694305+0000 mon.a (mon.0) 502 : cluster [DBG] Standby manager daemon b restarted 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: cluster 2026-03-10T13:46:32.694373+0000 mon.a (mon.0) 503 : cluster [DBG] Standby manager daemon b started 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: cluster 2026-03-10T13:46:32.694373+0000 mon.a (mon.0) 503 : cluster [DBG] Standby manager daemon b started 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: audit 2026-03-10T13:46:32.695306+0000 mon.a (mon.0) 504 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: audit 2026-03-10T13:46:32.695306+0000 mon.a (mon.0) 504 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: audit 2026-03-10T13:46:32.695647+0000 mon.a (mon.0) 505 : audit [DBG] 
from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: audit 2026-03-10T13:46:32.695647+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: audit 2026-03-10T13:46:32.703757+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: audit 2026-03-10T13:46:32.703757+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: audit 2026-03-10T13:46:32.720058+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: audit 2026-03-10T13:46:32.720058+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:46:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: cluster 2026-03-10T13:46:33.659437+0000 mon.a (mon.0) 508 : cluster [DBG] mgrmap e17: a(active, since 1.02768s), standbys: b 2026-03-10T13:46:33.967 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:33 vm00 bash[20748]: cluster 2026-03-10T13:46:33.659437+0000 mon.a (mon.0) 508 : cluster [DBG] mgrmap e17: a(active, since 1.02768s), standbys: b 2026-03-10T13:46:33.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: audit 2026-03-10T13:46:32.685631+0000 mon.b (mon.2) 12 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T13:46:33.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: audit 2026-03-10T13:46:32.685631+0000 mon.b (mon.2) 12 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: audit 2026-03-10T13:46:32.692551+0000 mon.b (mon.2) 13 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: audit 2026-03-10T13:46:32.692551+0000 mon.b (mon.2) 13 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: audit 2026-03-10T13:46:32.693021+0000 mon.b (mon.2) 14 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: audit 2026-03-10T13:46:32.693021+0000 mon.b (mon.2) 14 : audit [DBG] from='mgr.? 
192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: audit 2026-03-10T13:46:32.693244+0000 mon.b (mon.2) 15 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: audit 2026-03-10T13:46:32.693244+0000 mon.b (mon.2) 15 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: cluster 2026-03-10T13:46:32.694305+0000 mon.a (mon.0) 502 : cluster [DBG] Standby manager daemon b restarted 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: cluster 2026-03-10T13:46:32.694305+0000 mon.a (mon.0) 502 : cluster [DBG] Standby manager daemon b restarted 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: cluster 2026-03-10T13:46:32.694373+0000 mon.a (mon.0) 503 : cluster [DBG] Standby manager daemon b started 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: cluster 2026-03-10T13:46:32.694373+0000 mon.a (mon.0) 503 : cluster [DBG] Standby manager daemon b started 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: audit 2026-03-10T13:46:32.695306+0000 mon.a (mon.0) 504 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: audit 2026-03-10T13:46:32.695306+0000 mon.a (mon.0) 504 : audit [DBG] 
from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: audit 2026-03-10T13:46:32.695647+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: audit 2026-03-10T13:46:32.695647+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: audit 2026-03-10T13:46:32.703757+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: audit 2026-03-10T13:46:32.703757+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: audit 2026-03-10T13:46:32.720058+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: audit 2026-03-10T13:46:32.720058+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: cluster 2026-03-10T13:46:33.659437+0000 mon.a (mon.0) 508 : cluster [DBG] mgrmap e17: a(active, since 1.02768s), standbys: b 2026-03-10T13:46:33.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:33 vm07 bash[23044]: cluster 2026-03-10T13:46:33.659437+0000 mon.a (mon.0) 508 : cluster [DBG] mgrmap e17: a(active, since 1.02768s), standbys: b 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: audit 2026-03-10T13:46:32.685631+0000 mon.b (mon.2) 12 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: audit 2026-03-10T13:46:32.685631+0000 mon.b (mon.2) 12 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: audit 2026-03-10T13:46:32.692551+0000 mon.b (mon.2) 13 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: audit 2026-03-10T13:46:32.692551+0000 mon.b (mon.2) 13 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: audit 2026-03-10T13:46:32.693021+0000 mon.b (mon.2) 14 : audit [DBG] from='mgr.? 
192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: audit 2026-03-10T13:46:32.693021+0000 mon.b (mon.2) 14 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: audit 2026-03-10T13:46:32.693244+0000 mon.b (mon.2) 15 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: audit 2026-03-10T13:46:32.693244+0000 mon.b (mon.2) 15 : audit [DBG] from='mgr.? 192.168.123.107:0/2116809014' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: cluster 2026-03-10T13:46:32.694305+0000 mon.a (mon.0) 502 : cluster [DBG] Standby manager daemon b restarted 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: cluster 2026-03-10T13:46:32.694305+0000 mon.a (mon.0) 502 : cluster [DBG] Standby manager daemon b restarted 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: cluster 2026-03-10T13:46:32.694373+0000 mon.a (mon.0) 503 : cluster [DBG] Standby manager daemon b started 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: cluster 2026-03-10T13:46:32.694373+0000 mon.a (mon.0) 503 : cluster [DBG] Standby manager daemon b started 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: audit 2026-03-10T13:46:32.695306+0000 mon.a (mon.0) 504 : audit [DBG] 
from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: audit 2026-03-10T13:46:32.695306+0000 mon.a (mon.0) 504 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: audit 2026-03-10T13:46:32.695647+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: audit 2026-03-10T13:46:32.695647+0000 mon.a (mon.0) 505 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: audit 2026-03-10T13:46:32.703757+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: audit 2026-03-10T13:46:32.703757+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: audit 2026-03-10T13:46:32.720058+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 
2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: audit 2026-03-10T13:46:32.720058+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: cluster 2026-03-10T13:46:33.659437+0000 mon.a (mon.0) 508 : cluster [DBG] mgrmap e17: a(active, since 1.02768s), standbys: b 2026-03-10T13:46:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:33 vm08 bash[23387]: cluster 2026-03-10T13:46:33.659437+0000 mon.a (mon.0) 508 : cluster [DBG] mgrmap e17: a(active, since 1.02768s), standbys: b 2026-03-10T13:46:34.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:34 vm00 bash[20748]: cephadm 2026-03-10T13:46:33.713874+0000 mgr.a (mgr.14388) 2 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Bus STARTING 2026-03-10T13:46:34.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:34 vm00 bash[20748]: cephadm 2026-03-10T13:46:33.713874+0000 mgr.a (mgr.14388) 2 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Bus STARTING 2026-03-10T13:46:34.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:34 vm00 bash[20748]: cephadm 2026-03-10T13:46:33.814921+0000 mgr.a (mgr.14388) 3 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T13:46:34.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:34 vm00 bash[20748]: cephadm 2026-03-10T13:46:33.814921+0000 mgr.a (mgr.14388) 3 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Serving on http://192.168.123.100:8765 2026-03-10T13:46:34.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:34 vm00 bash[20748]: cephadm 2026-03-10T13:46:33.922560+0000 mgr.a (mgr.14388) 4 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T13:46:34.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 
13:46:34 vm00 bash[20748]: cephadm 2026-03-10T13:46:33.922560+0000 mgr.a (mgr.14388) 4 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Serving on https://192.168.123.100:7150 2026-03-10T13:46:34.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:34 vm00 bash[20748]: cephadm 2026-03-10T13:46:33.922616+0000 mgr.a (mgr.14388) 5 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Bus STARTED 2026-03-10T13:46:34.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:34 vm00 bash[20748]: cephadm 2026-03-10T13:46:33.922616+0000 mgr.a (mgr.14388) 5 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Bus STARTED 2026-03-10T13:46:34.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:34 vm00 bash[20748]: cephadm 2026-03-10T13:46:33.922945+0000 mgr.a (mgr.14388) 6 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Client ('192.168.123.100', 42398) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T13:46:34.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:34 vm00 bash[20748]: cephadm 2026-03-10T13:46:33.922945+0000 mgr.a (mgr.14388) 6 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Client ('192.168.123.100', 42398) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T13:46:34.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:34 vm07 bash[23044]: cephadm 2026-03-10T13:46:33.713874+0000 mgr.a (mgr.14388) 2 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Bus STARTING 2026-03-10T13:46:34.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:34 vm07 bash[23044]: cephadm 2026-03-10T13:46:33.713874+0000 mgr.a (mgr.14388) 2 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Bus STARTING 2026-03-10T13:46:34.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:34 vm07 bash[23044]: cephadm 2026-03-10T13:46:33.814921+0000 mgr.a (mgr.14388) 3 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Serving on 
http://192.168.123.100:8765
2026-03-10T13:46:34.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:34 vm07 bash[23044]: cephadm 2026-03-10T13:46:33.922560+0000 mgr.a (mgr.14388) 4 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Serving on https://192.168.123.100:7150
2026-03-10T13:46:34.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:34 vm07 bash[23044]: cephadm 2026-03-10T13:46:33.922616+0000 mgr.a (mgr.14388) 5 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Bus STARTED
2026-03-10T13:46:34.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:34 vm07 bash[23044]: cephadm 2026-03-10T13:46:33.922945+0000 mgr.a (mgr.14388) 6 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Client ('192.168.123.100', 42398) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T13:46:35.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:34 vm08 bash[23387]: cephadm 2026-03-10T13:46:33.713874+0000 mgr.a (mgr.14388) 2 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Bus STARTING
2026-03-10T13:46:35.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:34 vm08 bash[23387]: cephadm 2026-03-10T13:46:33.814921+0000 mgr.a (mgr.14388) 3 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Serving on http://192.168.123.100:8765
2026-03-10T13:46:35.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:34 vm08 bash[23387]: cephadm 2026-03-10T13:46:33.922560+0000 mgr.a (mgr.14388) 4 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Serving on https://192.168.123.100:7150
2026-03-10T13:46:35.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:34 vm08 bash[23387]: cephadm 2026-03-10T13:46:33.922616+0000 mgr.a (mgr.14388) 5 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Bus STARTED
2026-03-10T13:46:35.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:34 vm08 bash[23387]: cephadm 2026-03-10T13:46:33.922945+0000 mgr.a (mgr.14388) 6 : cephadm [INF] [10/Mar/2026:13:46:33] ENGINE Client ('192.168.123.100', 42398) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T13:46:35.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:35 vm00 bash[20748]: cluster 2026-03-10T13:46:34.650193+0000 mgr.a (mgr.14388) 7 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:35.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:35 vm00 bash[20748]: cluster 2026-03-10T13:46:34.703341+0000 mon.a (mon.0) 509 : cluster [DBG] mgrmap e18: a(active, since 2s), standbys: b
2026-03-10T13:46:35.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:35 vm07 bash[23044]: cluster 2026-03-10T13:46:34.650193+0000 mgr.a (mgr.14388) 7 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:35.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:35 vm07 bash[23044]: cluster 2026-03-10T13:46:34.703341+0000 mon.a (mon.0) 509 : cluster [DBG] mgrmap e18: a(active, since 2s), standbys: b
2026-03-10T13:46:36.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:35 vm08 bash[23387]: cluster 2026-03-10T13:46:34.650193+0000 mgr.a (mgr.14388) 7 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:36.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:35 vm08 bash[23387]: cluster 2026-03-10T13:46:34.703341+0000 mon.a (mon.0) 509 : cluster [DBG] mgrmap e18: a(active, since 2s), standbys: b
2026-03-10T13:46:36.967 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:36 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:46:36] "GET /metrics HTTP/1.1" 200 20061 "" "Prometheus/2.51.0"
2026-03-10T13:46:37.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:37 vm00 bash[20748]: cluster 2026-03-10T13:46:36.650457+0000 mgr.a (mgr.14388) 8 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:37.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:37 vm00 bash[20748]: cluster 2026-03-10T13:46:36.715627+0000 mon.a (mon.0) 510 : cluster [DBG] mgrmap e19: a(active, since 4s), standbys: b
2026-03-10T13:46:37.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:37 vm07 bash[23044]: cluster 2026-03-10T13:46:36.650457+0000 mgr.a (mgr.14388) 8 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:37.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:37 vm07 bash[23044]: cluster 2026-03-10T13:46:36.715627+0000 mon.a (mon.0) 510 : cluster [DBG] mgrmap e19: a(active, since 4s), standbys: b
2026-03-10T13:46:38.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:37 vm08 bash[23387]: cluster 2026-03-10T13:46:36.650457+0000 mgr.a (mgr.14388) 8 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:38.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:37 vm08 bash[23387]: cluster 2026-03-10T13:46:36.715627+0000 mon.a (mon.0) 510 : cluster [DBG] mgrmap e19: a(active, since 4s), standbys: b
2026-03-10T13:46:39.318 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.060695+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.318 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.065834+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.318 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.080147+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.318 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.084978+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.318 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.227234+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.318 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.231716+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.318 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.617264+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.318 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.621954+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.318 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.622903+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-10T13:46:39.319 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.623393+0000 mgr.a (mgr.14388) 9 : cephadm [INF] Adjusting osd_memory_target on vm08 to 2503M
2026-03-10T13:46:39.319 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.626417+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.319 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.643350+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.319 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.646955+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.319 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.647754+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T13:46:39.319 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.793275+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.319 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.797001+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.319 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.797690+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T13:46:39.319 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.798330+0000 mon.a (mon.0) 527 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:46:39.319 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.798712+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:46:39.319 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.954702+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.319 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.959094+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.319 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.962677+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.319 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.966395+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.319 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.970158+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.319 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.973560+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.319 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.976693+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.319 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:39 vm00 bash[20748]: audit 2026-03-10T13:46:38.982276+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.060695+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.065834+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.080147+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.084978+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.227234+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.231716+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.617264+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.621954+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.622903+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-10T13:46:39.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.623393+0000 mgr.a (mgr.14388) 9 : cephadm [INF] Adjusting osd_memory_target on vm08 to 2503M
2026-03-10T13:46:39.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.626417+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.643350+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.646955+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.647754+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T13:46:39.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.793275+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.797001+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.797690+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch
2026-03-10T13:46:39.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.798330+0000 mon.a (mon.0) 527 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:46:39.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.798712+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:46:39.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.954702+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.959094+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.962677+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.966395+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.970158+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.973560+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.976693+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:39 vm08 bash[23387]: audit 2026-03-10T13:46:38.982276+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.060695+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.065834+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.080147+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.084978+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.227234+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.231716+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.617264+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.621954+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.622903+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: cephadm 2026-03-10T13:46:38.623393+0000 mgr.a (mgr.14388) 9 : cephadm [INF] Adjusting osd_memory_target on vm08 to 2503M
2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.626417+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.643350+0000 mon.a (mon.0) 521 : audit [INF]
from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.643350+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.646955+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.646955+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.647754+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.647754+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.793275+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.793275+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.797001+0000 mon.a (mon.0) 525 : audit 
[INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.797001+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.797690+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.797690+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm00", "name": "osd_memory_target"}]: dispatch 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.798330+0000 mon.a (mon.0) 527 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.798330+0000 mon.a (mon.0) 527 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.798712+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.798712+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14388 
192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.954702+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.954702+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.959094+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.959094+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.962677+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.962677+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.966395+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.966395+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 
13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.970158+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.499 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.970158+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.500 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.973560+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.500 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.973560+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.500 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.976693+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.500 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.976693+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.500 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.982276+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:39.500 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:39 vm07 bash[23044]: audit 2026-03-10T13:46:38.982276+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:40.467 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[21015]: [10/Mar/2026:13:46:40] ENGINE Bus STOPPING 2026-03-10T13:46:40.865 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cluster 
2026-03-10T13:46:38.650673+0000 mgr.a (mgr.14388) 10 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:40.865 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cluster 2026-03-10T13:46:38.650673+0000 mgr.a (mgr.14388) 10 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:40.865 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.799288+0000 mgr.a (mgr.14388) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T13:46:40.865 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.799288+0000 mgr.a (mgr.14388) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T13:46:40.865 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.799417+0000 mgr.a (mgr.14388) 12 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T13:46:40.865 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.799417+0000 mgr.a (mgr.14388) 12 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.799510+0000 mgr.a (mgr.14388) 13 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.799510+0000 mgr.a (mgr.14388) 13 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.837665+0000 mgr.a (mgr.14388) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 
bash[23387]: cephadm 2026-03-10T13:46:38.837665+0000 mgr.a (mgr.14388) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.839604+0000 mgr.a (mgr.14388) 15 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.839604+0000 mgr.a (mgr.14388) 15 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.839821+0000 mgr.a (mgr.14388) 16 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.839821+0000 mgr.a (mgr.14388) 16 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.874740+0000 mgr.a (mgr.14388) 17 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.874740+0000 mgr.a (mgr.14388) 17 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.874856+0000 mgr.a (mgr.14388) 18 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 
2026-03-10T13:46:38.874856+0000 mgr.a (mgr.14388) 18 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.876789+0000 mgr.a (mgr.14388) 19 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.876789+0000 mgr.a (mgr.14388) 19 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.911666+0000 mgr.a (mgr.14388) 20 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.911666+0000 mgr.a (mgr.14388) 20 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.913263+0000 mgr.a (mgr.14388) 21 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.913263+0000 mgr.a (mgr.14388) 21 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.914505+0000 mgr.a (mgr.14388) 22 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 
13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.914505+0000 mgr.a (mgr.14388) 22 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.997352+0000 mgr.a (mgr.14388) 23 : cephadm [INF] Reconfiguring grafana.vm00 (dependencies changed)... 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:38.997352+0000 mgr.a (mgr.14388) 23 : cephadm [INF] Reconfiguring grafana.vm00 (dependencies changed)... 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:39.030617+0000 mgr.a (mgr.14388) 24 : cephadm [INF] Reconfiguring daemon grafana.vm00 on vm00 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:39.030617+0000 mgr.a (mgr.14388) 24 : cephadm [INF] Reconfiguring daemon grafana.vm00 on vm00 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:39.610226+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:39.610226+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:39.615018+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:39.615018+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 
2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:39.615881+0000 mgr.a (mgr.14388) 25 : cephadm [INF] Reconfiguring alertmanager.vm08 (dependencies changed)... 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:39.615881+0000 mgr.a (mgr.14388) 25 : cephadm [INF] Reconfiguring alertmanager.vm08 (dependencies changed)... 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:39.620029+0000 mgr.a (mgr.14388) 26 : cephadm [INF] Reconfiguring daemon alertmanager.vm08 on vm08 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: cephadm 2026-03-10T13:46:39.620029+0000 mgr.a (mgr.14388) 26 : cephadm [INF] Reconfiguring daemon alertmanager.vm08 on vm08 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:40.204024+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:40.204024+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:40.210065+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:40.210065+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:40.212827+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14388 
192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:40.212827+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:40.213874+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm00.local:3000"}]: dispatch 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:40.213874+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm00.local:3000"}]: dispatch 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:40.219537+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:40.219537+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:40.227675+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:40.227675+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' 
entity='mgr.a' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:40.228219+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:40.228219+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:40.231617+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:40.231617+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:40.265174+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:46:40.866 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:40 vm08 bash[23387]: audit 2026-03-10T13:46:40.265174+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:46:40.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cluster 2026-03-10T13:46:38.650673+0000 mgr.a (mgr.14388) 10 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 
60 GiB / 60 GiB avail 2026-03-10T13:46:40.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cluster 2026-03-10T13:46:38.650673+0000 mgr.a (mgr.14388) 10 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:40.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.799288+0000 mgr.a (mgr.14388) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T13:46:40.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.799288+0000 mgr.a (mgr.14388) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf 2026-03-10T13:46:40.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.799417+0000 mgr.a (mgr.14388) 12 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T13:46:40.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.799417+0000 mgr.a (mgr.14388) 12 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T13:46:40.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.799510+0000 mgr.a (mgr.14388) 13 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-10T13:46:40.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.799510+0000 mgr.a (mgr.14388) 13 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-10T13:46:40.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.837665+0000 mgr.a (mgr.14388) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:46:40.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.837665+0000 mgr.a (mgr.14388) 14 : cephadm [INF] Updating 
vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:46:40.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.839604+0000 mgr.a (mgr.14388) 15 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:46:40.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.839604+0000 mgr.a (mgr.14388) 15 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:46:40.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.839821+0000 mgr.a (mgr.14388) 16 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:46:40.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.839821+0000 mgr.a (mgr.14388) 16 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf 2026-03-10T13:46:40.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.874740+0000 mgr.a (mgr.14388) 17 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:46:40.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.874740+0000 mgr.a (mgr.14388) 17 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:46:40.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.874856+0000 mgr.a (mgr.14388) 18 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.874856+0000 mgr.a (mgr.14388) 18 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.876789+0000 mgr.a (mgr.14388) 19 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.911666+0000 mgr.a (mgr.14388) 20 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.913263+0000 mgr.a (mgr.14388) 21 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.914505+0000 mgr.a (mgr.14388) 22 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:38.997352+0000 mgr.a (mgr.14388) 23 : cephadm [INF] Reconfiguring grafana.vm00 (dependencies changed)...
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:39.030617+0000 mgr.a (mgr.14388) 24 : cephadm [INF] Reconfiguring daemon grafana.vm00 on vm00
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: audit 2026-03-10T13:46:39.610226+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: audit 2026-03-10T13:46:39.615018+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:39.615881+0000 mgr.a (mgr.14388) 25 : cephadm [INF] Reconfiguring alertmanager.vm08 (dependencies changed)...
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: cephadm 2026-03-10T13:46:39.620029+0000 mgr.a (mgr.14388) 26 : cephadm [INF] Reconfiguring daemon alertmanager.vm08 on vm08
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: audit 2026-03-10T13:46:40.204024+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: audit 2026-03-10T13:46:40.210065+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: audit 2026-03-10T13:46:40.212827+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: audit 2026-03-10T13:46:40.213874+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm00.local:3000"}]: dispatch
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: audit 2026-03-10T13:46:40.219537+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: audit 2026-03-10T13:46:40.227675+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: audit 2026-03-10T13:46:40.228219+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: audit 2026-03-10T13:46:40.231617+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[20748]: audit 2026-03-10T13:46:40.265174+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[21015]: [10/Mar/2026:13:46:40] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[21015]: [10/Mar/2026:13:46:40] ENGINE Bus STOPPED
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[21015]: [10/Mar/2026:13:46:40] ENGINE Bus STARTING
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[21015]: [10/Mar/2026:13:46:40] ENGINE Serving on http://:::9283
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[21015]: [10/Mar/2026:13:46:40] ENGINE Bus STARTED
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[21015]: [10/Mar/2026:13:46:40] ENGINE Bus STOPPING
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[21015]: [10/Mar/2026:13:46:40] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[21015]: [10/Mar/2026:13:46:40] ENGINE Bus STOPPED
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[21015]: [10/Mar/2026:13:46:40] ENGINE Bus STARTING
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[21015]: [10/Mar/2026:13:46:40] ENGINE Serving on http://:::9283
2026-03-10T13:46:40.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:40 vm00 bash[21015]: [10/Mar/2026:13:46:40] ENGINE Bus STARTED
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: cluster 2026-03-10T13:46:38.650673+0000 mgr.a (mgr.14388) 10 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: cephadm 2026-03-10T13:46:38.799288+0000 mgr.a (mgr.14388) 11 : cephadm [INF] Updating vm00:/etc/ceph/ceph.conf
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: cephadm 2026-03-10T13:46:38.799417+0000 mgr.a (mgr.14388) 12 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: cephadm 2026-03-10T13:46:38.799510+0000 mgr.a (mgr.14388) 13 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: cephadm 2026-03-10T13:46:38.837665+0000 mgr.a (mgr.14388) 14 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: cephadm 2026-03-10T13:46:38.839604+0000 mgr.a (mgr.14388) 15 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: cephadm 2026-03-10T13:46:38.839821+0000 mgr.a (mgr.14388) 16 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.conf
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: cephadm 2026-03-10T13:46:38.874740+0000 mgr.a (mgr.14388) 17 : cephadm [INF] Updating vm00:/etc/ceph/ceph.client.admin.keyring
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: cephadm 2026-03-10T13:46:38.874856+0000 mgr.a (mgr.14388) 18 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: cephadm 2026-03-10T13:46:38.876789+0000 mgr.a (mgr.14388) 19 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: cephadm 2026-03-10T13:46:38.911666+0000 mgr.a (mgr.14388) 20 : cephadm [INF] Updating vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: cephadm 2026-03-10T13:46:38.913263+0000 mgr.a (mgr.14388) 21 : cephadm [INF] Updating vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: cephadm 2026-03-10T13:46:38.914505+0000 mgr.a (mgr.14388) 22 : cephadm [INF] Updating vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/config/ceph.client.admin.keyring
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: cephadm 2026-03-10T13:46:38.997352+0000 mgr.a (mgr.14388) 23 : cephadm [INF] Reconfiguring grafana.vm00 (dependencies changed)...
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: cephadm 2026-03-10T13:46:39.030617+0000 mgr.a (mgr.14388) 24 : cephadm [INF] Reconfiguring daemon grafana.vm00 on vm00
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: audit 2026-03-10T13:46:39.610226+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: audit 2026-03-10T13:46:39.615018+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: cephadm 2026-03-10T13:46:39.615881+0000 mgr.a (mgr.14388) 25 : cephadm [INF] Reconfiguring alertmanager.vm08 (dependencies changed)...
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: cephadm 2026-03-10T13:46:39.620029+0000 mgr.a (mgr.14388) 26 : cephadm [INF] Reconfiguring daemon alertmanager.vm08 on vm08
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: audit 2026-03-10T13:46:40.204024+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: audit 2026-03-10T13:46:40.210065+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: audit 2026-03-10T13:46:40.212827+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T13:46:40.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: audit 2026-03-10T13:46:40.213874+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm00.local:3000"}]: dispatch
2026-03-10T13:46:41.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: audit 2026-03-10T13:46:40.219537+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:41.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: audit 2026-03-10T13:46:40.227675+0000 mon.a (mon.0) 544 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T13:46:41.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: audit 2026-03-10T13:46:40.228219+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch
2026-03-10T13:46:41.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: audit 2026-03-10T13:46:40.231617+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:41.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:40 vm07 bash[23044]: audit 2026-03-10T13:46:40.265174+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:46:41.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:41 vm00 bash[20748]: audit 2026-03-10T13:46:40.213128+0000 mgr.a (mgr.14388) 27 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T13:46:41.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:41 vm00 bash[20748]: audit 2026-03-10T13:46:40.214050+0000 mgr.a (mgr.14388) 28 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm00.local:3000"}]: dispatch
2026-03-10T13:46:41.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:41 vm00 bash[20748]: audit 2026-03-10T13:46:40.227811+0000 mgr.a (mgr.14388) 29 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T13:46:41.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:41 vm00 bash[20748]: audit 2026-03-10T13:46:40.228343+0000 mgr.a (mgr.14388) 30 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch
2026-03-10T13:46:41.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:41 vm07 bash[23044]: audit 2026-03-10T13:46:40.213128+0000 mgr.a (mgr.14388) 27 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T13:46:41.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:41 vm07 bash[23044]: audit 2026-03-10T13:46:40.214050+0000 mgr.a (mgr.14388) 28 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm00.local:3000"}]: dispatch
2026-03-10T13:46:41.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:41 vm07 bash[23044]: audit 2026-03-10T13:46:40.227811+0000 mgr.a (mgr.14388) 29 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T13:46:41.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:41 vm07 bash[23044]: audit 2026-03-10T13:46:40.228343+0000 mgr.a (mgr.14388) 30 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch
2026-03-10T13:46:42.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:41 vm08 bash[23387]: audit 2026-03-10T13:46:40.213128+0000 mgr.a (mgr.14388) 27 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T13:46:42.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:41 vm08 bash[23387]: audit 2026-03-10T13:46:40.214050+0000 mgr.a (mgr.14388) 28 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm00.local:3000"}]: dispatch
2026-03-10T13:46:42.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:41 vm08 bash[23387]: audit 2026-03-10T13:46:40.227811+0000 mgr.a (mgr.14388) 29 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T13:46:42.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:41 vm08 bash[23387]: audit 2026-03-10T13:46:40.228343+0000 mgr.a (mgr.14388) 30 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch
2026-03-10T13:46:42.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:42 vm00 bash[20748]: cluster 2026-03-10T13:46:40.650900+0000 mgr.a (mgr.14388) 31 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 10 op/s
2026-03-10T13:46:42.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:42 vm07 bash[23044]: cluster 2026-03-10T13:46:40.650900+0000 mgr.a (mgr.14388) 31 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 10 op/s
2026-03-10T13:46:43.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:42 vm08 bash[23387]: cluster 2026-03-10T13:46:40.650900+0000 mgr.a (mgr.14388) 31 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 10 op/s
2026-03-10T13:46:44.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:44 vm00 bash[20748]: cluster 2026-03-10T13:46:42.651123+0000 mgr.a (mgr.14388) 32 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T13:46:44.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:44 vm07 bash[23044]: cluster 2026-03-10T13:46:42.651123+0000 mgr.a (mgr.14388) 32 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T13:46:45.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:44 vm08 bash[23387]: cluster 2026-03-10T13:46:42.651123+0000 mgr.a (mgr.14388) 32 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T13:46:46.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:46 vm08 bash[23387]: cluster 2026-03-10T13:46:44.651340+0000 mgr.a (mgr.14388) 33 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T13:46:46.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:46 vm08 bash[23387]: audit 2026-03-10T13:46:45.261536+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:46.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:46 vm08 bash[23387]: audit 2026-03-10T13:46:45.266407+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:46.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:46 vm08 bash[23387]: audit 2026-03-10T13:46:45.375037+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:46.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:46 vm08 bash[23387]: audit 2026-03-10T13:46:45.378893+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:46.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:46 vm08 bash[23387]: audit 2026-03-10T13:46:45.379745+0000 mon.a (mon.0) 552 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:46:46.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:46 vm08 bash[23387]: audit 2026-03-10T13:46:45.380154+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:46:46.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:46 vm08 bash[23387]: audit 2026-03-10T13:46:45.383297+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:46:46.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:46 vm08 bash[23387]: audit
2026-03-10T13:46:45.383297+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:46 vm00 bash[20748]: cluster 2026-03-10T13:46:44.651340+0000 mgr.a (mgr.14388) 33 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T13:46:46.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:46 vm00 bash[20748]: cluster 2026-03-10T13:46:44.651340+0000 mgr.a (mgr.14388) 33 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T13:46:46.625 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:46 vm00 bash[20748]: audit 2026-03-10T13:46:45.261536+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:46 vm00 bash[20748]: audit 2026-03-10T13:46:45.261536+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:46 vm00 bash[20748]: audit 2026-03-10T13:46:45.266407+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:46 vm00 bash[20748]: audit 2026-03-10T13:46:45.266407+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:46 vm00 bash[20748]: audit 2026-03-10T13:46:45.375037+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:46 vm00 bash[20748]: audit 2026-03-10T13:46:45.375037+0000 mon.a (mon.0) 550 : audit [INF] 
from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:46 vm00 bash[20748]: audit 2026-03-10T13:46:45.378893+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:46 vm00 bash[20748]: audit 2026-03-10T13:46:45.378893+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:46 vm00 bash[20748]: audit 2026-03-10T13:46:45.379745+0000 mon.a (mon.0) 552 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:46:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:46 vm00 bash[20748]: audit 2026-03-10T13:46:45.379745+0000 mon.a (mon.0) 552 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:46:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:46 vm00 bash[20748]: audit 2026-03-10T13:46:45.380154+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:46:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:46 vm00 bash[20748]: audit 2026-03-10T13:46:45.380154+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:46:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:46 vm00 bash[20748]: audit 2026-03-10T13:46:45.383297+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.626 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:46 vm00 bash[20748]: audit 
2026-03-10T13:46:45.383297+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:46 vm07 bash[23044]: cluster 2026-03-10T13:46:44.651340+0000 mgr.a (mgr.14388) 33 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T13:46:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:46 vm07 bash[23044]: cluster 2026-03-10T13:46:44.651340+0000 mgr.a (mgr.14388) 33 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-10T13:46:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:46 vm07 bash[23044]: audit 2026-03-10T13:46:45.261536+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:46 vm07 bash[23044]: audit 2026-03-10T13:46:45.261536+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:46 vm07 bash[23044]: audit 2026-03-10T13:46:45.266407+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:46 vm07 bash[23044]: audit 2026-03-10T13:46:45.266407+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:46 vm07 bash[23044]: audit 2026-03-10T13:46:45.375037+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:46 vm07 bash[23044]: audit 2026-03-10T13:46:45.375037+0000 mon.a (mon.0) 550 : audit [INF] 
from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:46 vm07 bash[23044]: audit 2026-03-10T13:46:45.378893+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:46 vm07 bash[23044]: audit 2026-03-10T13:46:45.378893+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:46 vm07 bash[23044]: audit 2026-03-10T13:46:45.379745+0000 mon.a (mon.0) 552 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:46:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:46 vm07 bash[23044]: audit 2026-03-10T13:46:45.379745+0000 mon.a (mon.0) 552 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:46:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:46 vm07 bash[23044]: audit 2026-03-10T13:46:45.380154+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:46:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:46 vm07 bash[23044]: audit 2026-03-10T13:46:45.380154+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:46:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:46 vm07 bash[23044]: audit 2026-03-10T13:46:45.383297+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:46 vm07 bash[23044]: audit 
2026-03-10T13:46:45.383297+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:46:46.967 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:46 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:46:46] "GET /metrics HTTP/1.1" 200 20061 "" "Prometheus/2.51.0" 2026-03-10T13:46:48.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:48 vm08 bash[23387]: cluster 2026-03-10T13:46:46.651529+0000 mgr.a (mgr.14388) 34 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T13:46:48.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:48 vm08 bash[23387]: cluster 2026-03-10T13:46:46.651529+0000 mgr.a (mgr.14388) 34 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T13:46:48.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:48 vm08 bash[23387]: audit 2026-03-10T13:46:47.693098+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:48.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:48 vm08 bash[23387]: audit 2026-03-10T13:46:47.693098+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:48.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:48 vm00 bash[20748]: cluster 2026-03-10T13:46:46.651529+0000 mgr.a (mgr.14388) 34 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T13:46:48.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:48 vm00 bash[20748]: cluster 2026-03-10T13:46:46.651529+0000 mgr.a (mgr.14388) 34 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 
81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T13:46:48.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:48 vm00 bash[20748]: audit 2026-03-10T13:46:47.693098+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:48.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:48 vm00 bash[20748]: audit 2026-03-10T13:46:47.693098+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:48.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:48 vm07 bash[23044]: cluster 2026-03-10T13:46:46.651529+0000 mgr.a (mgr.14388) 34 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T13:46:48.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:48 vm07 bash[23044]: cluster 2026-03-10T13:46:46.651529+0000 mgr.a (mgr.14388) 34 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T13:46:48.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:48 vm07 bash[23044]: audit 2026-03-10T13:46:47.693098+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:48.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:48 vm07 bash[23044]: audit 2026-03-10T13:46:47.693098+0000 mon.a (mon.0) 555 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:46:50.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:50 vm08 bash[23387]: cluster 2026-03-10T13:46:48.651695+0000 mgr.a (mgr.14388) 35 : cluster [DBG] pgmap v11: 1 
pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T13:46:50.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:50 vm08 bash[23387]: cluster 2026-03-10T13:46:48.651695+0000 mgr.a (mgr.14388) 35 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T13:46:50.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:50 vm00 bash[20748]: cluster 2026-03-10T13:46:48.651695+0000 mgr.a (mgr.14388) 35 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T13:46:50.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:50 vm00 bash[20748]: cluster 2026-03-10T13:46:48.651695+0000 mgr.a (mgr.14388) 35 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T13:46:50.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:50 vm07 bash[23044]: cluster 2026-03-10T13:46:48.651695+0000 mgr.a (mgr.14388) 35 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T13:46:50.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:50 vm07 bash[23044]: cluster 2026-03-10T13:46:48.651695+0000 mgr.a (mgr.14388) 35 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T13:46:52.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:52 vm08 bash[23387]: cluster 2026-03-10T13:46:50.651884+0000 mgr.a (mgr.14388) 36 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T13:46:52.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:52 vm08 bash[23387]: cluster 2026-03-10T13:46:50.651884+0000 mgr.a (mgr.14388) 36 : cluster 
[DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T13:46:52.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:52 vm00 bash[20748]: cluster 2026-03-10T13:46:50.651884+0000 mgr.a (mgr.14388) 36 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T13:46:52.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:52 vm00 bash[20748]: cluster 2026-03-10T13:46:50.651884+0000 mgr.a (mgr.14388) 36 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T13:46:52.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:52 vm07 bash[23044]: cluster 2026-03-10T13:46:50.651884+0000 mgr.a (mgr.14388) 36 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T13:46:52.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:52 vm07 bash[23044]: cluster 2026-03-10T13:46:50.651884+0000 mgr.a (mgr.14388) 36 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-10T13:46:54.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:54 vm08 bash[23387]: cluster 2026-03-10T13:46:52.652062+0000 mgr.a (mgr.14388) 37 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:54.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:54 vm08 bash[23387]: cluster 2026-03-10T13:46:52.652062+0000 mgr.a (mgr.14388) 37 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:54.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:54 vm00 bash[20748]: cluster 2026-03-10T13:46:52.652062+0000 mgr.a (mgr.14388) 37 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 
KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:54.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:54 vm00 bash[20748]: cluster 2026-03-10T13:46:52.652062+0000 mgr.a (mgr.14388) 37 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:54.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:54 vm07 bash[23044]: cluster 2026-03-10T13:46:52.652062+0000 mgr.a (mgr.14388) 37 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:54.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:54 vm07 bash[23044]: cluster 2026-03-10T13:46:52.652062+0000 mgr.a (mgr.14388) 37 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:56.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:56 vm08 bash[23387]: cluster 2026-03-10T13:46:54.652225+0000 mgr.a (mgr.14388) 38 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:56.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:56 vm08 bash[23387]: cluster 2026-03-10T13:46:54.652225+0000 mgr.a (mgr.14388) 38 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:56.624 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:56 vm00 bash[20748]: cluster 2026-03-10T13:46:54.652225+0000 mgr.a (mgr.14388) 38 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:56.624 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:56 vm00 bash[20748]: cluster 2026-03-10T13:46:54.652225+0000 mgr.a (mgr.14388) 38 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:56.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:56 vm07 bash[23044]: cluster 2026-03-10T13:46:54.652225+0000 mgr.a (mgr.14388) 
38 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:56.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:56 vm07 bash[23044]: cluster 2026-03-10T13:46:54.652225+0000 mgr.a (mgr.14388) 38 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:56.967 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:46:56 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:46:56] "GET /metrics HTTP/1.1" 200 21327 "" "Prometheus/2.51.0" 2026-03-10T13:46:58.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:58 vm08 bash[23387]: cluster 2026-03-10T13:46:56.652386+0000 mgr.a (mgr.14388) 39 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:58.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:46:58 vm08 bash[23387]: cluster 2026-03-10T13:46:56.652386+0000 mgr.a (mgr.14388) 39 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:58.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:58 vm00 bash[20748]: cluster 2026-03-10T13:46:56.652386+0000 mgr.a (mgr.14388) 39 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:58.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:46:58 vm00 bash[20748]: cluster 2026-03-10T13:46:56.652386+0000 mgr.a (mgr.14388) 39 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:58.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:58 vm07 bash[23044]: cluster 2026-03-10T13:46:56.652386+0000 mgr.a (mgr.14388) 39 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:46:58.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:46:58 vm07 bash[23044]: cluster 2026-03-10T13:46:56.652386+0000 mgr.a 
(mgr.14388) 39 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:00.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:00 vm08 bash[23387]: cluster 2026-03-10T13:46:58.652600+0000 mgr.a (mgr.14388) 40 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:00.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:00 vm08 bash[23387]: cluster 2026-03-10T13:46:58.652600+0000 mgr.a (mgr.14388) 40 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:00.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:00 vm00 bash[20748]: cluster 2026-03-10T13:46:58.652600+0000 mgr.a (mgr.14388) 40 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:00.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:00 vm00 bash[20748]: cluster 2026-03-10T13:46:58.652600+0000 mgr.a (mgr.14388) 40 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:00.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:00 vm07 bash[23044]: cluster 2026-03-10T13:46:58.652600+0000 mgr.a (mgr.14388) 40 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:00.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:00 vm07 bash[23044]: cluster 2026-03-10T13:46:58.652600+0000 mgr.a (mgr.14388) 40 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:02.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:02 vm08 bash[23387]: cluster 2026-03-10T13:47:00.652840+0000 mgr.a (mgr.14388) 41 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:02.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:02 vm08 
bash[23387]: cluster 2026-03-10T13:47:00.652840+0000 mgr.a (mgr.14388) 41 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:02.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:02 vm00 bash[20748]: cluster 2026-03-10T13:47:00.652840+0000 mgr.a (mgr.14388) 41 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:02.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:02 vm00 bash[20748]: cluster 2026-03-10T13:47:00.652840+0000 mgr.a (mgr.14388) 41 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:02.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:02 vm07 bash[23044]: cluster 2026-03-10T13:47:00.652840+0000 mgr.a (mgr.14388) 41 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:02.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:02 vm07 bash[23044]: cluster 2026-03-10T13:47:00.652840+0000 mgr.a (mgr.14388) 41 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:03.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:03 vm08 bash[23387]: audit 2026-03-10T13:47:02.693307+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:03.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:03 vm08 bash[23387]: audit 2026-03-10T13:47:02.693307+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:03.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:03 vm00 bash[20748]: audit 2026-03-10T13:47:02.693307+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' 
entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:03.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:03 vm00 bash[20748]: audit 2026-03-10T13:47:02.693307+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:03.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:03 vm07 bash[23044]: audit 2026-03-10T13:47:02.693307+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:03.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:03 vm07 bash[23044]: audit 2026-03-10T13:47:02.693307+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:04.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:04 vm08 bash[23387]: cluster 2026-03-10T13:47:02.653070+0000 mgr.a (mgr.14388) 42 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:04.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:04 vm08 bash[23387]: cluster 2026-03-10T13:47:02.653070+0000 mgr.a (mgr.14388) 42 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:04.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:04 vm00 bash[20748]: cluster 2026-03-10T13:47:02.653070+0000 mgr.a (mgr.14388) 42 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:04.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:04 vm00 bash[20748]: cluster 2026-03-10T13:47:02.653070+0000 mgr.a (mgr.14388) 42 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T13:47:04.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:04 vm07 bash[23044]: cluster 2026-03-10T13:47:02.653070+0000 mgr.a (mgr.14388) 42 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:04.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:04 vm07 bash[23044]: cluster 2026-03-10T13:47:02.653070+0000 mgr.a (mgr.14388) 42 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:06.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:06 vm08 bash[23387]: cluster 2026-03-10T13:47:04.653280+0000 mgr.a (mgr.14388) 43 : cluster [DBG] pgmap v19: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:06.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:06 vm08 bash[23387]: cluster 2026-03-10T13:47:04.653280+0000 mgr.a (mgr.14388) 43 : cluster [DBG] pgmap v19: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:06.624 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:06 vm00 bash[20748]: cluster 2026-03-10T13:47:04.653280+0000 mgr.a (mgr.14388) 43 : cluster [DBG] pgmap v19: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:06.624 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:06 vm00 bash[20748]: cluster 2026-03-10T13:47:04.653280+0000 mgr.a (mgr.14388) 43 : cluster [DBG] pgmap v19: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:06.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:06 vm07 bash[23044]: cluster 2026-03-10T13:47:04.653280+0000 mgr.a (mgr.14388) 43 : cluster [DBG] pgmap v19: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:06.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:06 vm07 bash[23044]: cluster 2026-03-10T13:47:04.653280+0000 mgr.a (mgr.14388) 43 : cluster [DBG] pgmap v19: 1 pgs: 1 
active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:06.967 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:47:06 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:47:06] "GET /metrics HTTP/1.1" 200 21325 "" "Prometheus/2.51.0"
2026-03-10T13:47:08.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:08 vm08 bash[23387]: cluster 2026-03-10T13:47:06.653445+0000 mgr.a (mgr.14388) 44 : cluster [DBG] pgmap v20: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:08.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:08 vm00 bash[20748]: cluster 2026-03-10T13:47:06.653445+0000 mgr.a (mgr.14388) 44 : cluster [DBG] pgmap v20: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:08.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:08 vm07 bash[23044]: cluster 2026-03-10T13:47:06.653445+0000 mgr.a (mgr.14388) 44 : cluster [DBG] pgmap v20: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:10.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:10 vm00 bash[20748]: cluster 2026-03-10T13:47:08.653673+0000 mgr.a (mgr.14388) 45 : cluster [DBG] pgmap v21: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:10.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:10 vm07 bash[23044]: cluster 2026-03-10T13:47:08.653673+0000 mgr.a (mgr.14388) 45 : cluster [DBG] pgmap v21: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:10.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:10 vm08 bash[23387]: cluster 2026-03-10T13:47:08.653673+0000 mgr.a (mgr.14388) 45 : cluster [DBG] pgmap v21: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:12.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:12 vm00 bash[20748]: cluster 2026-03-10T13:47:10.653910+0000 mgr.a (mgr.14388) 46 : cluster [DBG] pgmap v22: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:12.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:12 vm07 bash[23044]: cluster 2026-03-10T13:47:10.653910+0000 mgr.a (mgr.14388) 46 : cluster [DBG] pgmap v22: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:12.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:12 vm08 bash[23387]: cluster 2026-03-10T13:47:10.653910+0000 mgr.a (mgr.14388) 46 : cluster [DBG] pgmap v22: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:14.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:14 vm00 bash[20748]: cluster 2026-03-10T13:47:12.654131+0000 mgr.a (mgr.14388) 47 : cluster [DBG] pgmap v23: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:14.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:14 vm07 bash[23044]: cluster 2026-03-10T13:47:12.654131+0000 mgr.a (mgr.14388) 47 : cluster [DBG] pgmap v23: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:14.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:14 vm08 bash[23387]: cluster 2026-03-10T13:47:12.654131+0000 mgr.a (mgr.14388) 47 : cluster [DBG] pgmap v23: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:16.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:16 vm00 bash[20748]: cluster 2026-03-10T13:47:14.654364+0000 mgr.a (mgr.14388) 48 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:16.717 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:47:16 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:47:16] "GET /metrics HTTP/1.1" 200 21325 "" "Prometheus/2.51.0"
2026-03-10T13:47:16.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:16 vm07 bash[23044]: cluster 2026-03-10T13:47:14.654364+0000 mgr.a (mgr.14388) 48 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:16.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:16 vm08 bash[23387]: cluster 2026-03-10T13:47:14.654364+0000 mgr.a (mgr.14388) 48 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:18.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:18 vm00 bash[20748]: cluster 2026-03-10T13:47:16.654539+0000 mgr.a (mgr.14388) 49 : cluster [DBG] pgmap v25: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:18.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:18 vm00 bash[20748]: audit 2026-03-10T13:47:17.693482+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:47:18.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:18 vm07 bash[23044]: cluster 2026-03-10T13:47:16.654539+0000 mgr.a (mgr.14388) 49 : cluster [DBG] pgmap v25: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:18.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:18 vm07 bash[23044]: audit 2026-03-10T13:47:17.693482+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:47:18.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:18 vm08 bash[23387]: cluster 2026-03-10T13:47:16.654539+0000 mgr.a (mgr.14388) 49 : cluster [DBG] pgmap v25: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:18.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:18 vm08 bash[23387]: audit 2026-03-10T13:47:17.693482+0000 mon.a (mon.0) 557 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:47:20.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:20 vm00 bash[20748]: cluster 2026-03-10T13:47:18.654682+0000 mgr.a (mgr.14388) 50 : cluster [DBG] pgmap v26: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:20.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:20 vm07 bash[23044]: cluster 2026-03-10T13:47:18.654682+0000 mgr.a (mgr.14388) 50 : cluster [DBG] pgmap v26: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:20.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:20 vm08 bash[23387]: cluster 2026-03-10T13:47:18.654682+0000 mgr.a (mgr.14388) 50 : cluster [DBG] pgmap v26: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:22.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:22 vm00 bash[20748]: cluster 2026-03-10T13:47:20.654863+0000 mgr.a (mgr.14388) 51 : cluster [DBG] pgmap v27: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:22 vm07 bash[23044]: cluster 2026-03-10T13:47:20.654863+0000 mgr.a (mgr.14388) 51 : cluster [DBG] pgmap v27: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:22.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:22 vm08 bash[23387]: cluster 2026-03-10T13:47:20.654863+0000 mgr.a (mgr.14388) 51 : cluster [DBG] pgmap v27: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:24.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:24 vm00 bash[20748]: cluster 2026-03-10T13:47:22.655064+0000 mgr.a (mgr.14388) 52 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:24.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:24 vm07 bash[23044]: cluster 2026-03-10T13:47:22.655064+0000 mgr.a (mgr.14388) 52 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:24.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:24 vm08 bash[23387]: cluster 2026-03-10T13:47:22.655064+0000 mgr.a (mgr.14388) 52 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:26.717 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:47:26 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:47:26] "GET /metrics HTTP/1.1" 200 21330 "" "Prometheus/2.51.0"
2026-03-10T13:47:26.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:26 vm00 bash[20748]: cluster 2026-03-10T13:47:24.655248+0000 mgr.a (mgr.14388) 53 : cluster [DBG] pgmap v29: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:26.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:26 vm07 bash[23044]: cluster 2026-03-10T13:47:24.655248+0000 mgr.a (mgr.14388) 53 : cluster [DBG] pgmap v29: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:26.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:26 vm08 bash[23387]: cluster 2026-03-10T13:47:24.655248+0000 mgr.a (mgr.14388) 53 : cluster [DBG] pgmap v29: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:28.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:28 vm00 bash[20748]: cluster 2026-03-10T13:47:26.655457+0000 mgr.a (mgr.14388) 54 : cluster [DBG] pgmap v30: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:28.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:28 vm07 bash[23044]: cluster 2026-03-10T13:47:26.655457+0000 mgr.a (mgr.14388) 54 : cluster [DBG] pgmap v30: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:28.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:28 vm08 bash[23387]: cluster 2026-03-10T13:47:26.655457+0000 mgr.a (mgr.14388) 54 : cluster [DBG] pgmap v30: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:30.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:30 vm00 bash[20748]: cluster 2026-03-10T13:47:28.655601+0000 mgr.a (mgr.14388) 55 : cluster [DBG] pgmap v31: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:30.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:30 vm07 bash[23044]: cluster 2026-03-10T13:47:28.655601+0000 mgr.a (mgr.14388) 55 : cluster [DBG] pgmap v31: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:30.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:30 vm08 bash[23387]: cluster 2026-03-10T13:47:28.655601+0000 mgr.a (mgr.14388) 55 : cluster [DBG] pgmap v31: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:32.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:32 vm00 bash[20748]: cluster 2026-03-10T13:47:30.655817+0000 mgr.a (mgr.14388) 56 : cluster [DBG] pgmap v32: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:32.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:32 vm07 bash[23044]: cluster 2026-03-10T13:47:30.655817+0000 mgr.a (mgr.14388) 56 : cluster [DBG] pgmap v32: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:32.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:32 vm08 bash[23387]: cluster 2026-03-10T13:47:30.655817+0000 mgr.a (mgr.14388) 56 : cluster [DBG] pgmap v32: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:33.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:33 vm00 bash[20748]: audit 2026-03-10T13:47:32.693722+0000 mon.a (mon.0) 558 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:47:33.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:33 vm07 bash[23044]: audit 2026-03-10T13:47:32.693722+0000 mon.a (mon.0) 558 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:47:33.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:33 vm08 bash[23387]: audit 2026-03-10T13:47:32.693722+0000 mon.a (mon.0) 558 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:47:34.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:34 vm00 bash[20748]: cluster 2026-03-10T13:47:32.656006+0000 mgr.a (mgr.14388) 57 : cluster [DBG] pgmap v33: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:34.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:34 vm07 bash[23044]: cluster 2026-03-10T13:47:32.656006+0000 mgr.a (mgr.14388) 57 : cluster [DBG] pgmap v33: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:34.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:34 vm08 bash[23387]: cluster 2026-03-10T13:47:32.656006+0000 mgr.a (mgr.14388) 57 : cluster [DBG] pgmap v33: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:36.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:36 vm00 bash[20748]: cluster 2026-03-10T13:47:34.656197+0000 mgr.a (mgr.14388) 58 : cluster [DBG] pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:36.717 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:47:36 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:47:36] "GET /metrics HTTP/1.1" 200 21329 "" "Prometheus/2.51.0"
2026-03-10T13:47:36.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:36 vm07 bash[23044]: cluster 2026-03-10T13:47:34.656197+0000 mgr.a (mgr.14388) 58 : cluster [DBG] pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:36.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:36 vm08 bash[23387]: cluster 2026-03-10T13:47:34.656197+0000 mgr.a (mgr.14388) 58 : cluster [DBG] pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:38.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:38 vm00 bash[20748]: cluster 2026-03-10T13:47:36.656382+0000 mgr.a (mgr.14388) 59 : cluster [DBG] pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:38.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:38 vm07 bash[23044]: cluster 2026-03-10T13:47:36.656382+0000 mgr.a (mgr.14388) 59 : cluster [DBG] pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:38.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:38 vm08 bash[23387]: cluster 2026-03-10T13:47:36.656382+0000 mgr.a (mgr.14388) 59 : cluster [DBG] pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:40.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:40 vm00 bash[20748]: cluster 2026-03-10T13:47:38.656564+0000 mgr.a (mgr.14388) 60 : cluster [DBG] pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:40.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:40 vm07 bash[23044]: cluster 2026-03-10T13:47:38.656564+0000 mgr.a (mgr.14388) 60 : cluster [DBG] pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:40.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:40 vm08 bash[23387]: cluster 2026-03-10T13:47:38.656564+0000 mgr.a (mgr.14388) 60 : cluster [DBG] pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:42.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:42 vm00 bash[20748]: cluster 2026-03-10T13:47:40.656760+0000 mgr.a (mgr.14388) 61 : cluster [DBG] pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:42.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:42 vm07 bash[23044]: cluster 2026-03-10T13:47:40.656760+0000 mgr.a (mgr.14388) 61 : cluster [DBG] pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:42.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:42 vm08 bash[23387]: cluster 2026-03-10T13:47:40.656760+0000 mgr.a (mgr.14388) 61 : cluster [DBG] pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:44.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:44 vm07 bash[23044]: cluster 2026-03-10T13:47:42.656970+0000 mgr.a (mgr.14388) 62 : cluster [DBG] pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:44.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:44 vm08 bash[23387]: cluster 2026-03-10T13:47:42.656970+0000 mgr.a (mgr.14388) 62 : cluster [DBG] pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:44.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:44 vm00 bash[20748]: cluster 2026-03-10T13:47:42.656970+0000 mgr.a (mgr.14388) 62 : cluster [DBG] pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:45.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:45 vm07 bash[23044]: audit 2026-03-10T13:47:45.423892+0000 mon.a (mon.0) 559 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:47:45.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:45 vm08 bash[23387]: audit 2026-03-10T13:47:45.423892+0000 mon.a (mon.0) 559 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:47:45.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:45 vm00 bash[20748]: audit 2026-03-10T13:47:45.423892+0000 mon.a (mon.0) 559 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:47:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:46 vm07 bash[23044]: cluster 2026-03-10T13:47:44.657218+0000 mgr.a (mgr.14388) 63 : cluster [DBG] pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:46 vm07 bash[23044]: audit 2026-03-10T13:47:45.742700+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:47:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:46 vm07 bash[23044]: audit 2026-03-10T13:47:45.743256+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:47:46.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:46 vm07 bash[23044]: audit 2026-03-10T13:47:45.748406+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:47:46.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:46 vm08 bash[23387]: cluster 2026-03-10T13:47:44.657218+0000 mgr.a (mgr.14388) 63 : cluster [DBG] pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:47:46.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:46 vm08 bash[23387]: audit 2026-03-10T13:47:45.742700+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:47:46.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:46 vm08 bash[23387]: audit 2026-03-10T13:47:45.743256+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:47:46.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:46 vm08 bash[23387]: audit 2026-03-10T13:47:45.748406+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:47:46.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:46 vm00 bash[20748]: cluster
2026-03-10T13:47:44.657218+0000 mgr.a (mgr.14388) 63 : cluster [DBG] pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:46.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:46 vm00 bash[20748]: cluster 2026-03-10T13:47:44.657218+0000 mgr.a (mgr.14388) 63 : cluster [DBG] pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:46.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:46 vm00 bash[20748]: audit 2026-03-10T13:47:45.742700+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:47:46.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:46 vm00 bash[20748]: audit 2026-03-10T13:47:45.742700+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:47:46.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:46 vm00 bash[20748]: audit 2026-03-10T13:47:45.743256+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:47:46.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:46 vm00 bash[20748]: audit 2026-03-10T13:47:45.743256+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:47:46.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:46 vm00 bash[20748]: audit 2026-03-10T13:47:45.748406+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:47:46.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:46 vm00 bash[20748]: audit 2026-03-10T13:47:45.748406+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.14388 
192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:47:46.967 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:47:46 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:47:46] "GET /metrics HTTP/1.1" 200 21329 "" "Prometheus/2.51.0" 2026-03-10T13:47:48.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:48 vm07 bash[23044]: cluster 2026-03-10T13:47:46.657419+0000 mgr.a (mgr.14388) 64 : cluster [DBG] pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:48.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:48 vm07 bash[23044]: cluster 2026-03-10T13:47:46.657419+0000 mgr.a (mgr.14388) 64 : cluster [DBG] pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:48.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:48 vm07 bash[23044]: audit 2026-03-10T13:47:47.693905+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:48.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:48 vm07 bash[23044]: audit 2026-03-10T13:47:47.693905+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:48.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:48 vm08 bash[23387]: cluster 2026-03-10T13:47:46.657419+0000 mgr.a (mgr.14388) 64 : cluster [DBG] pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:48.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:48 vm08 bash[23387]: cluster 2026-03-10T13:47:46.657419+0000 mgr.a (mgr.14388) 64 : cluster [DBG] pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:48.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:48 vm08 bash[23387]: audit 2026-03-10T13:47:47.693905+0000 
mon.a (mon.0) 563 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:48.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:48 vm08 bash[23387]: audit 2026-03-10T13:47:47.693905+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:48.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:48 vm00 bash[20748]: cluster 2026-03-10T13:47:46.657419+0000 mgr.a (mgr.14388) 64 : cluster [DBG] pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:48.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:48 vm00 bash[20748]: cluster 2026-03-10T13:47:46.657419+0000 mgr.a (mgr.14388) 64 : cluster [DBG] pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:48.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:48 vm00 bash[20748]: audit 2026-03-10T13:47:47.693905+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:48.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:48 vm00 bash[20748]: audit 2026-03-10T13:47:47.693905+0000 mon.a (mon.0) 563 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:47:50.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:50 vm07 bash[23044]: cluster 2026-03-10T13:47:48.657602+0000 mgr.a (mgr.14388) 65 : cluster [DBG] pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:50.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:50 vm07 bash[23044]: cluster 2026-03-10T13:47:48.657602+0000 mgr.a (mgr.14388) 65 : cluster [DBG] pgmap v41: 1 pgs: 1 
active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:50.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:50 vm08 bash[23387]: cluster 2026-03-10T13:47:48.657602+0000 mgr.a (mgr.14388) 65 : cluster [DBG] pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:50.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:50 vm08 bash[23387]: cluster 2026-03-10T13:47:48.657602+0000 mgr.a (mgr.14388) 65 : cluster [DBG] pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:50.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:50 vm00 bash[20748]: cluster 2026-03-10T13:47:48.657602+0000 mgr.a (mgr.14388) 65 : cluster [DBG] pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:50.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:50 vm00 bash[20748]: cluster 2026-03-10T13:47:48.657602+0000 mgr.a (mgr.14388) 65 : cluster [DBG] pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:52.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:52 vm07 bash[23044]: cluster 2026-03-10T13:47:50.657840+0000 mgr.a (mgr.14388) 66 : cluster [DBG] pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:52.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:52 vm07 bash[23044]: cluster 2026-03-10T13:47:50.657840+0000 mgr.a (mgr.14388) 66 : cluster [DBG] pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:52.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:52 vm08 bash[23387]: cluster 2026-03-10T13:47:50.657840+0000 mgr.a (mgr.14388) 66 : cluster [DBG] pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:52.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:52 vm08 bash[23387]: cluster 2026-03-10T13:47:50.657840+0000 
mgr.a (mgr.14388) 66 : cluster [DBG] pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:52.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:52 vm00 bash[20748]: cluster 2026-03-10T13:47:50.657840+0000 mgr.a (mgr.14388) 66 : cluster [DBG] pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:52.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:52 vm00 bash[20748]: cluster 2026-03-10T13:47:50.657840+0000 mgr.a (mgr.14388) 66 : cluster [DBG] pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:54.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:54 vm08 bash[23387]: cluster 2026-03-10T13:47:52.658070+0000 mgr.a (mgr.14388) 67 : cluster [DBG] pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:54.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:54 vm08 bash[23387]: cluster 2026-03-10T13:47:52.658070+0000 mgr.a (mgr.14388) 67 : cluster [DBG] pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:54.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:54 vm00 bash[20748]: cluster 2026-03-10T13:47:52.658070+0000 mgr.a (mgr.14388) 67 : cluster [DBG] pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:54.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:54 vm00 bash[20748]: cluster 2026-03-10T13:47:52.658070+0000 mgr.a (mgr.14388) 67 : cluster [DBG] pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:54.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:54 vm07 bash[23044]: cluster 2026-03-10T13:47:52.658070+0000 mgr.a (mgr.14388) 67 : cluster [DBG] pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:54.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:54 
vm07 bash[23044]: cluster 2026-03-10T13:47:52.658070+0000 mgr.a (mgr.14388) 67 : cluster [DBG] pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:56.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:56 vm08 bash[23387]: cluster 2026-03-10T13:47:54.658249+0000 mgr.a (mgr.14388) 68 : cluster [DBG] pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:56.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:56 vm08 bash[23387]: cluster 2026-03-10T13:47:54.658249+0000 mgr.a (mgr.14388) 68 : cluster [DBG] pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:56.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:56 vm00 bash[20748]: cluster 2026-03-10T13:47:54.658249+0000 mgr.a (mgr.14388) 68 : cluster [DBG] pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:56.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:56 vm00 bash[20748]: cluster 2026-03-10T13:47:54.658249+0000 mgr.a (mgr.14388) 68 : cluster [DBG] pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:56.967 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:47:56 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:47:56] "GET /metrics HTTP/1.1" 200 21324 "" "Prometheus/2.51.0" 2026-03-10T13:47:56.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:56 vm07 bash[23044]: cluster 2026-03-10T13:47:54.658249+0000 mgr.a (mgr.14388) 68 : cluster [DBG] pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:56.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:56 vm07 bash[23044]: cluster 2026-03-10T13:47:54.658249+0000 mgr.a (mgr.14388) 68 : cluster [DBG] pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:58.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 
13:47:58 vm08 bash[23387]: cluster 2026-03-10T13:47:56.658433+0000 mgr.a (mgr.14388) 69 : cluster [DBG] pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:58.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:47:58 vm08 bash[23387]: cluster 2026-03-10T13:47:56.658433+0000 mgr.a (mgr.14388) 69 : cluster [DBG] pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:58.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:58 vm00 bash[20748]: cluster 2026-03-10T13:47:56.658433+0000 mgr.a (mgr.14388) 69 : cluster [DBG] pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:58.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:47:58 vm00 bash[20748]: cluster 2026-03-10T13:47:56.658433+0000 mgr.a (mgr.14388) 69 : cluster [DBG] pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:58.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:58 vm07 bash[23044]: cluster 2026-03-10T13:47:56.658433+0000 mgr.a (mgr.14388) 69 : cluster [DBG] pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:47:58.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:47:58 vm07 bash[23044]: cluster 2026-03-10T13:47:56.658433+0000 mgr.a (mgr.14388) 69 : cluster [DBG] pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:00.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:00 vm08 bash[23387]: cluster 2026-03-10T13:47:58.658625+0000 mgr.a (mgr.14388) 70 : cluster [DBG] pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:00.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:00 vm08 bash[23387]: cluster 2026-03-10T13:47:58.658625+0000 mgr.a (mgr.14388) 70 : cluster [DBG] pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T13:48:00.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:00 vm00 bash[20748]: cluster 2026-03-10T13:47:58.658625+0000 mgr.a (mgr.14388) 70 : cluster [DBG] pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:00.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:00 vm00 bash[20748]: cluster 2026-03-10T13:47:58.658625+0000 mgr.a (mgr.14388) 70 : cluster [DBG] pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:00.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:00 vm07 bash[23044]: cluster 2026-03-10T13:47:58.658625+0000 mgr.a (mgr.14388) 70 : cluster [DBG] pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:00.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:00 vm07 bash[23044]: cluster 2026-03-10T13:47:58.658625+0000 mgr.a (mgr.14388) 70 : cluster [DBG] pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:02.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:02 vm08 bash[23387]: cluster 2026-03-10T13:48:00.658818+0000 mgr.a (mgr.14388) 71 : cluster [DBG] pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:02.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:02 vm08 bash[23387]: cluster 2026-03-10T13:48:00.658818+0000 mgr.a (mgr.14388) 71 : cluster [DBG] pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:02.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:02 vm00 bash[20748]: cluster 2026-03-10T13:48:00.658818+0000 mgr.a (mgr.14388) 71 : cluster [DBG] pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:02.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:02 vm00 bash[20748]: cluster 2026-03-10T13:48:00.658818+0000 mgr.a (mgr.14388) 71 : cluster [DBG] pgmap v47: 1 pgs: 1 
active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:02.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:02 vm07 bash[23044]: cluster 2026-03-10T13:48:00.658818+0000 mgr.a (mgr.14388) 71 : cluster [DBG] pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:02.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:02 vm07 bash[23044]: cluster 2026-03-10T13:48:00.658818+0000 mgr.a (mgr.14388) 71 : cluster [DBG] pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:03.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:03 vm08 bash[23387]: audit 2026-03-10T13:48:02.694219+0000 mon.a (mon.0) 564 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:03.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:03 vm08 bash[23387]: audit 2026-03-10T13:48:02.694219+0000 mon.a (mon.0) 564 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:03.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:03 vm00 bash[20748]: audit 2026-03-10T13:48:02.694219+0000 mon.a (mon.0) 564 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:03.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:03 vm00 bash[20748]: audit 2026-03-10T13:48:02.694219+0000 mon.a (mon.0) 564 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:03.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:03 vm07 bash[23044]: audit 2026-03-10T13:48:02.694219+0000 mon.a (mon.0) 564 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd 
blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:03.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:03 vm07 bash[23044]: audit 2026-03-10T13:48:02.694219+0000 mon.a (mon.0) 564 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:04.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:04 vm08 bash[23387]: cluster 2026-03-10T13:48:02.659039+0000 mgr.a (mgr.14388) 72 : cluster [DBG] pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:04.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:04 vm08 bash[23387]: cluster 2026-03-10T13:48:02.659039+0000 mgr.a (mgr.14388) 72 : cluster [DBG] pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:04.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:04 vm00 bash[20748]: cluster 2026-03-10T13:48:02.659039+0000 mgr.a (mgr.14388) 72 : cluster [DBG] pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:04.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:04 vm00 bash[20748]: cluster 2026-03-10T13:48:02.659039+0000 mgr.a (mgr.14388) 72 : cluster [DBG] pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:04.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:04 vm07 bash[23044]: cluster 2026-03-10T13:48:02.659039+0000 mgr.a (mgr.14388) 72 : cluster [DBG] pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:04.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:04 vm07 bash[23044]: cluster 2026-03-10T13:48:02.659039+0000 mgr.a (mgr.14388) 72 : cluster [DBG] pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:06.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:06 vm08 bash[23387]: cluster 
2026-03-10T13:48:04.659219+0000 mgr.a (mgr.14388) 73 : cluster [DBG] pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:06.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:06 vm08 bash[23387]: cluster 2026-03-10T13:48:04.659219+0000 mgr.a (mgr.14388) 73 : cluster [DBG] pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:06.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:06 vm00 bash[20748]: cluster 2026-03-10T13:48:04.659219+0000 mgr.a (mgr.14388) 73 : cluster [DBG] pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:06.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:06 vm00 bash[20748]: cluster 2026-03-10T13:48:04.659219+0000 mgr.a (mgr.14388) 73 : cluster [DBG] pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:06.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:48:06 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:48:06] "GET /metrics HTTP/1.1" 200 21324 "" "Prometheus/2.51.0" 2026-03-10T13:48:06.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:06 vm07 bash[23044]: cluster 2026-03-10T13:48:04.659219+0000 mgr.a (mgr.14388) 73 : cluster [DBG] pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:06.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:06 vm07 bash[23044]: cluster 2026-03-10T13:48:04.659219+0000 mgr.a (mgr.14388) 73 : cluster [DBG] pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:08.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:08 vm08 bash[23387]: cluster 2026-03-10T13:48:06.659458+0000 mgr.a (mgr.14388) 74 : cluster [DBG] pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:08.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:08 vm08 bash[23387]: 
cluster 2026-03-10T13:48:06.659458+0000 mgr.a (mgr.14388) 74 : cluster [DBG] pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:08.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:08 vm00 bash[20748]: cluster 2026-03-10T13:48:06.659458+0000 mgr.a (mgr.14388) 74 : cluster [DBG] pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:08.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:08 vm00 bash[20748]: cluster 2026-03-10T13:48:06.659458+0000 mgr.a (mgr.14388) 74 : cluster [DBG] pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:08.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:08 vm07 bash[23044]: cluster 2026-03-10T13:48:06.659458+0000 mgr.a (mgr.14388) 74 : cluster [DBG] pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:08.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:08 vm07 bash[23044]: cluster 2026-03-10T13:48:06.659458+0000 mgr.a (mgr.14388) 74 : cluster [DBG] pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:10.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:10 vm08 bash[23387]: cluster 2026-03-10T13:48:08.659665+0000 mgr.a (mgr.14388) 75 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:10.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:10 vm08 bash[23387]: cluster 2026-03-10T13:48:08.659665+0000 mgr.a (mgr.14388) 75 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:10.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:10 vm00 bash[20748]: cluster 2026-03-10T13:48:08.659665+0000 mgr.a (mgr.14388) 75 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:10.967 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:10 vm00 bash[20748]: cluster 2026-03-10T13:48:08.659665+0000 mgr.a (mgr.14388) 75 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:10.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:10 vm07 bash[23044]: cluster 2026-03-10T13:48:08.659665+0000 mgr.a (mgr.14388) 75 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:10.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:10 vm07 bash[23044]: cluster 2026-03-10T13:48:08.659665+0000 mgr.a (mgr.14388) 75 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:12.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:12 vm08 bash[23387]: cluster 2026-03-10T13:48:10.659855+0000 mgr.a (mgr.14388) 76 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:12.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:12 vm08 bash[23387]: cluster 2026-03-10T13:48:10.659855+0000 mgr.a (mgr.14388) 76 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:12.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:12 vm00 bash[20748]: cluster 2026-03-10T13:48:10.659855+0000 mgr.a (mgr.14388) 76 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:12.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:12 vm00 bash[20748]: cluster 2026-03-10T13:48:10.659855+0000 mgr.a (mgr.14388) 76 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:12.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:12 vm07 bash[23044]: cluster 2026-03-10T13:48:10.659855+0000 mgr.a (mgr.14388) 76 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 81 
MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:12.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:12 vm07 bash[23044]: cluster 2026-03-10T13:48:10.659855+0000 mgr.a (mgr.14388) 76 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:14.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:14 vm08 bash[23387]: cluster 2026-03-10T13:48:12.660049+0000 mgr.a (mgr.14388) 77 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:14.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:14 vm08 bash[23387]: cluster 2026-03-10T13:48:12.660049+0000 mgr.a (mgr.14388) 77 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:14.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:14 vm00 bash[20748]: cluster 2026-03-10T13:48:12.660049+0000 mgr.a (mgr.14388) 77 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:14.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:14 vm00 bash[20748]: cluster 2026-03-10T13:48:12.660049+0000 mgr.a (mgr.14388) 77 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:14.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:14 vm07 bash[23044]: cluster 2026-03-10T13:48:12.660049+0000 mgr.a (mgr.14388) 77 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:14.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:14 vm07 bash[23044]: cluster 2026-03-10T13:48:12.660049+0000 mgr.a (mgr.14388) 77 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:16.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:16 vm00 bash[20748]: cluster 2026-03-10T13:48:14.660249+0000 mgr.a (mgr.14388) 78 : cluster 
[DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:16.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:16 vm00 bash[20748]: cluster 2026-03-10T13:48:14.660249+0000 mgr.a (mgr.14388) 78 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:16.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:48:16 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:48:16] "GET /metrics HTTP/1.1" 200 21324 "" "Prometheus/2.51.0" 2026-03-10T13:48:16.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:16 vm07 bash[23044]: cluster 2026-03-10T13:48:14.660249+0000 mgr.a (mgr.14388) 78 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:16.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:16 vm07 bash[23044]: cluster 2026-03-10T13:48:14.660249+0000 mgr.a (mgr.14388) 78 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:17.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:16 vm08 bash[23387]: cluster 2026-03-10T13:48:14.660249+0000 mgr.a (mgr.14388) 78 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:17.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:16 vm08 bash[23387]: cluster 2026-03-10T13:48:14.660249+0000 mgr.a (mgr.14388) 78 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:18.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:18 vm00 bash[20748]: cluster 2026-03-10T13:48:16.660430+0000 mgr.a (mgr.14388) 79 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:18.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:18 vm00 bash[20748]: cluster 2026-03-10T13:48:16.660430+0000 mgr.a (mgr.14388) 79 : 
cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:18.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:18 vm00 bash[20748]: audit 2026-03-10T13:48:17.694497+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:18.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:18 vm00 bash[20748]: audit 2026-03-10T13:48:17.694497+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:18.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:18 vm07 bash[23044]: cluster 2026-03-10T13:48:16.660430+0000 mgr.a (mgr.14388) 79 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:18.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:18 vm07 bash[23044]: cluster 2026-03-10T13:48:16.660430+0000 mgr.a (mgr.14388) 79 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:18.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:18 vm07 bash[23044]: audit 2026-03-10T13:48:17.694497+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:18.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:18 vm07 bash[23044]: audit 2026-03-10T13:48:17.694497+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:19.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:18 vm08 bash[23387]: cluster 2026-03-10T13:48:16.660430+0000 mgr.a (mgr.14388) 79 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB 
data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:19.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:18 vm08 bash[23387]: cluster 2026-03-10T13:48:16.660430+0000 mgr.a (mgr.14388) 79 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:19.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:18 vm08 bash[23387]: audit 2026-03-10T13:48:17.694497+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:19.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:18 vm08 bash[23387]: audit 2026-03-10T13:48:17.694497+0000 mon.a (mon.0) 565 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:20.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:20 vm00 bash[20748]: cluster 2026-03-10T13:48:18.660594+0000 mgr.a (mgr.14388) 80 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:20.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:20 vm00 bash[20748]: cluster 2026-03-10T13:48:18.660594+0000 mgr.a (mgr.14388) 80 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:20.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:20 vm07 bash[23044]: cluster 2026-03-10T13:48:18.660594+0000 mgr.a (mgr.14388) 80 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:20.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:20 vm07 bash[23044]: cluster 2026-03-10T13:48:18.660594+0000 mgr.a (mgr.14388) 80 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:21.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:20 vm08 
bash[23387]: cluster 2026-03-10T13:48:18.660594+0000 mgr.a (mgr.14388) 80 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:21.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:20 vm08 bash[23387]: cluster 2026-03-10T13:48:18.660594+0000 mgr.a (mgr.14388) 80 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:22.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:22 vm00 bash[20748]: cluster 2026-03-10T13:48:20.660798+0000 mgr.a (mgr.14388) 81 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:22.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:22 vm00 bash[20748]: cluster 2026-03-10T13:48:20.660798+0000 mgr.a (mgr.14388) 81 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:22.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:22 vm07 bash[23044]: cluster 2026-03-10T13:48:20.660798+0000 mgr.a (mgr.14388) 81 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:22.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:22 vm07 bash[23044]: cluster 2026-03-10T13:48:20.660798+0000 mgr.a (mgr.14388) 81 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:23.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:22 vm08 bash[23387]: cluster 2026-03-10T13:48:20.660798+0000 mgr.a (mgr.14388) 81 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:23.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:22 vm08 bash[23387]: cluster 2026-03-10T13:48:20.660798+0000 mgr.a (mgr.14388) 81 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:24.967 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:24 vm00 bash[20748]: cluster 2026-03-10T13:48:22.661011+0000 mgr.a (mgr.14388) 82 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:24.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:24 vm00 bash[20748]: cluster 2026-03-10T13:48:22.661011+0000 mgr.a (mgr.14388) 82 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:24.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:24 vm07 bash[23044]: cluster 2026-03-10T13:48:22.661011+0000 mgr.a (mgr.14388) 82 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:24.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:24 vm07 bash[23044]: cluster 2026-03-10T13:48:22.661011+0000 mgr.a (mgr.14388) 82 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:25.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:24 vm08 bash[23387]: cluster 2026-03-10T13:48:22.661011+0000 mgr.a (mgr.14388) 82 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:25.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:24 vm08 bash[23387]: cluster 2026-03-10T13:48:22.661011+0000 mgr.a (mgr.14388) 82 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:26.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:26 vm00 bash[20748]: cluster 2026-03-10T13:48:24.661207+0000 mgr.a (mgr.14388) 83 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:26.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:26 vm00 bash[20748]: cluster 2026-03-10T13:48:24.661207+0000 mgr.a (mgr.14388) 83 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 81 
MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:26.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:48:26 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:48:26] "GET /metrics HTTP/1.1" 200 21325 "" "Prometheus/2.51.0" 2026-03-10T13:48:26.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:26 vm07 bash[23044]: cluster 2026-03-10T13:48:24.661207+0000 mgr.a (mgr.14388) 83 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:26.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:26 vm07 bash[23044]: cluster 2026-03-10T13:48:24.661207+0000 mgr.a (mgr.14388) 83 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:27.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:26 vm08 bash[23387]: cluster 2026-03-10T13:48:24.661207+0000 mgr.a (mgr.14388) 83 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:27.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:26 vm08 bash[23387]: cluster 2026-03-10T13:48:24.661207+0000 mgr.a (mgr.14388) 83 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:28.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:28 vm00 bash[20748]: cluster 2026-03-10T13:48:26.661433+0000 mgr.a (mgr.14388) 84 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:28.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:28 vm00 bash[20748]: cluster 2026-03-10T13:48:26.661433+0000 mgr.a (mgr.14388) 84 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:28.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:28 vm07 bash[23044]: cluster 2026-03-10T13:48:26.661433+0000 mgr.a (mgr.14388) 84 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 
81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:28.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:28 vm07 bash[23044]: cluster 2026-03-10T13:48:26.661433+0000 mgr.a (mgr.14388) 84 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:29.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:28 vm08 bash[23387]: cluster 2026-03-10T13:48:26.661433+0000 mgr.a (mgr.14388) 84 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:29.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:28 vm08 bash[23387]: cluster 2026-03-10T13:48:26.661433+0000 mgr.a (mgr.14388) 84 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:30.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:30 vm00 bash[20748]: cluster 2026-03-10T13:48:28.661645+0000 mgr.a (mgr.14388) 85 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:30.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:30 vm00 bash[20748]: cluster 2026-03-10T13:48:28.661645+0000 mgr.a (mgr.14388) 85 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:30.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:30 vm07 bash[23044]: cluster 2026-03-10T13:48:28.661645+0000 mgr.a (mgr.14388) 85 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:30.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:30 vm07 bash[23044]: cluster 2026-03-10T13:48:28.661645+0000 mgr.a (mgr.14388) 85 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:31.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:30 vm08 bash[23387]: cluster 2026-03-10T13:48:28.661645+0000 mgr.a (mgr.14388) 85 : 
cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:31.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:30 vm08 bash[23387]: cluster 2026-03-10T13:48:28.661645+0000 mgr.a (mgr.14388) 85 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:32 vm00 bash[20748]: cluster 2026-03-10T13:48:30.661848+0000 mgr.a (mgr.14388) 86 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:32.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:32 vm00 bash[20748]: cluster 2026-03-10T13:48:30.661848+0000 mgr.a (mgr.14388) 86 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:32.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:32 vm07 bash[23044]: cluster 2026-03-10T13:48:30.661848+0000 mgr.a (mgr.14388) 86 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:32.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:32 vm07 bash[23044]: cluster 2026-03-10T13:48:30.661848+0000 mgr.a (mgr.14388) 86 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:32 vm08 bash[23387]: cluster 2026-03-10T13:48:30.661848+0000 mgr.a (mgr.14388) 86 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:33.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:32 vm08 bash[23387]: cluster 2026-03-10T13:48:30.661848+0000 mgr.a (mgr.14388) 86 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:33.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:33 vm00 bash[20748]: audit 
2026-03-10T13:48:32.694458+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:33.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:33 vm00 bash[20748]: audit 2026-03-10T13:48:32.694458+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:33.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:33 vm07 bash[23044]: audit 2026-03-10T13:48:32.694458+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:33.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:33 vm07 bash[23044]: audit 2026-03-10T13:48:32.694458+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:33 vm08 bash[23387]: audit 2026-03-10T13:48:32.694458+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:34.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:33 vm08 bash[23387]: audit 2026-03-10T13:48:32.694458+0000 mon.a (mon.0) 566 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:34.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:34 vm00 bash[20748]: cluster 2026-03-10T13:48:32.662024+0000 mgr.a (mgr.14388) 87 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:34.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:34 vm00 
bash[20748]: cluster 2026-03-10T13:48:32.662024+0000 mgr.a (mgr.14388) 87 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:34.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:34 vm07 bash[23044]: cluster 2026-03-10T13:48:32.662024+0000 mgr.a (mgr.14388) 87 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:34.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:34 vm07 bash[23044]: cluster 2026-03-10T13:48:32.662024+0000 mgr.a (mgr.14388) 87 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:35.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:34 vm08 bash[23387]: cluster 2026-03-10T13:48:32.662024+0000 mgr.a (mgr.14388) 87 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:35.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:34 vm08 bash[23387]: cluster 2026-03-10T13:48:32.662024+0000 mgr.a (mgr.14388) 87 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:36.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:36 vm00 bash[20748]: cluster 2026-03-10T13:48:34.662224+0000 mgr.a (mgr.14388) 88 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:36.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:36 vm00 bash[20748]: cluster 2026-03-10T13:48:34.662224+0000 mgr.a (mgr.14388) 88 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:36.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:48:36 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:48:36] "GET /metrics HTTP/1.1" 200 21319 "" "Prometheus/2.51.0" 2026-03-10T13:48:36.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:36 
vm07 bash[23044]: cluster 2026-03-10T13:48:34.662224+0000 mgr.a (mgr.14388) 88 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:36.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:36 vm07 bash[23044]: cluster 2026-03-10T13:48:34.662224+0000 mgr.a (mgr.14388) 88 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:37.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:36 vm08 bash[23387]: cluster 2026-03-10T13:48:34.662224+0000 mgr.a (mgr.14388) 88 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:37.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:36 vm08 bash[23387]: cluster 2026-03-10T13:48:34.662224+0000 mgr.a (mgr.14388) 88 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:38.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:38 vm00 bash[20748]: cluster 2026-03-10T13:48:36.662383+0000 mgr.a (mgr.14388) 89 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:38.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:38 vm00 bash[20748]: cluster 2026-03-10T13:48:36.662383+0000 mgr.a (mgr.14388) 89 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:38.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:38 vm07 bash[23044]: cluster 2026-03-10T13:48:36.662383+0000 mgr.a (mgr.14388) 89 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:38.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:38 vm07 bash[23044]: cluster 2026-03-10T13:48:36.662383+0000 mgr.a (mgr.14388) 89 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T13:48:39.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:38 vm08 bash[23387]: cluster 2026-03-10T13:48:36.662383+0000 mgr.a (mgr.14388) 89 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:39.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:38 vm08 bash[23387]: cluster 2026-03-10T13:48:36.662383+0000 mgr.a (mgr.14388) 89 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:39.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:39 vm07 bash[23044]: cluster 2026-03-10T13:48:38.662569+0000 mgr.a (mgr.14388) 90 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:39.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:39 vm07 bash[23044]: cluster 2026-03-10T13:48:38.662569+0000 mgr.a (mgr.14388) 90 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:40.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:39 vm08 bash[23387]: cluster 2026-03-10T13:48:38.662569+0000 mgr.a (mgr.14388) 90 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:40.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:39 vm08 bash[23387]: cluster 2026-03-10T13:48:38.662569+0000 mgr.a (mgr.14388) 90 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:40.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:39 vm00 bash[20748]: cluster 2026-03-10T13:48:38.662569+0000 mgr.a (mgr.14388) 90 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:40.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:39 vm00 bash[20748]: cluster 2026-03-10T13:48:38.662569+0000 mgr.a (mgr.14388) 90 : cluster [DBG] pgmap v66: 1 pgs: 1 
active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:40.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:40 vm07 bash[23044]: cluster 2026-03-10T13:48:40.662794+0000 mgr.a (mgr.14388) 91 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:40.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:40 vm07 bash[23044]: cluster 2026-03-10T13:48:40.662794+0000 mgr.a (mgr.14388) 91 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:41.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:40 vm08 bash[23387]: cluster 2026-03-10T13:48:40.662794+0000 mgr.a (mgr.14388) 91 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:41.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:40 vm08 bash[23387]: cluster 2026-03-10T13:48:40.662794+0000 mgr.a (mgr.14388) 91 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:41.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:40 vm00 bash[20748]: cluster 2026-03-10T13:48:40.662794+0000 mgr.a (mgr.14388) 91 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:41.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:40 vm00 bash[20748]: cluster 2026-03-10T13:48:40.662794+0000 mgr.a (mgr.14388) 91 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:42.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:42 vm07 bash[23044]: cluster 2026-03-10T13:48:42.663012+0000 mgr.a (mgr.14388) 92 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:42.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:42 vm07 bash[23044]: cluster 2026-03-10T13:48:42.663012+0000 
mgr.a (mgr.14388) 92 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:43.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:42 vm08 bash[23387]: cluster 2026-03-10T13:48:42.663012+0000 mgr.a (mgr.14388) 92 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:43.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:42 vm08 bash[23387]: cluster 2026-03-10T13:48:42.663012+0000 mgr.a (mgr.14388) 92 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:43.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:42 vm00 bash[20748]: cluster 2026-03-10T13:48:42.663012+0000 mgr.a (mgr.14388) 92 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:43.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:42 vm00 bash[20748]: cluster 2026-03-10T13:48:42.663012+0000 mgr.a (mgr.14388) 92 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:44.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:44 vm07 bash[23044]: cluster 2026-03-10T13:48:44.663200+0000 mgr.a (mgr.14388) 93 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:44.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:44 vm07 bash[23044]: cluster 2026-03-10T13:48:44.663200+0000 mgr.a (mgr.14388) 93 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:45.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:44 vm08 bash[23387]: cluster 2026-03-10T13:48:44.663200+0000 mgr.a (mgr.14388) 93 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:45.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:44 
vm08 bash[23387]: cluster 2026-03-10T13:48:44.663200+0000 mgr.a (mgr.14388) 93 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:45.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:44 vm00 bash[20748]: cluster 2026-03-10T13:48:44.663200+0000 mgr.a (mgr.14388) 93 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:45.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:44 vm00 bash[20748]: cluster 2026-03-10T13:48:44.663200+0000 mgr.a (mgr.14388) 93 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:46.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:45 vm00 bash[20748]: audit 2026-03-10T13:48:45.791082+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:48:46.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:45 vm00 bash[20748]: audit 2026-03-10T13:48:45.791082+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:48:46.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:45 vm07 bash[23044]: audit 2026-03-10T13:48:45.791082+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:48:46.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:45 vm07 bash[23044]: audit 2026-03-10T13:48:45.791082+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:48:46.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:45 vm08 bash[23387]: audit 2026-03-10T13:48:45.791082+0000 mon.a (mon.0) 567 : 
audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:48:46.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:45 vm08 bash[23387]: audit 2026-03-10T13:48:45.791082+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:48:46.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:46 vm00 bash[20748]: audit 2026-03-10T13:48:46.111590+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:48:46.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:46 vm00 bash[20748]: audit 2026-03-10T13:48:46.111590+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:48:46.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:46 vm00 bash[20748]: audit 2026-03-10T13:48:46.112107+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:48:46.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:46 vm00 bash[20748]: audit 2026-03-10T13:48:46.112107+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:48:46.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:46 vm00 bash[20748]: audit 2026-03-10T13:48:46.116874+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:48:46.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:46 vm00 bash[20748]: audit 2026-03-10T13:48:46.116874+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14388 
192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:48:46.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:46 vm00 bash[20748]: cluster 2026-03-10T13:48:46.663432+0000 mgr.a (mgr.14388) 94 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:46.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:46 vm00 bash[20748]: cluster 2026-03-10T13:48:46.663432+0000 mgr.a (mgr.14388) 94 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:46.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:48:46 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:48:46] "GET /metrics HTTP/1.1" 200 21319 "" "Prometheus/2.51.0" 2026-03-10T13:48:47.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:46 vm07 bash[23044]: audit 2026-03-10T13:48:46.111590+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:48:47.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:46 vm07 bash[23044]: audit 2026-03-10T13:48:46.111590+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:48:47.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:46 vm07 bash[23044]: audit 2026-03-10T13:48:46.112107+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:48:47.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:46 vm07 bash[23044]: audit 2026-03-10T13:48:46.112107+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:48:47.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:46 
vm07 bash[23044]: audit 2026-03-10T13:48:46.116874+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:48:47.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:46 vm07 bash[23044]: audit 2026-03-10T13:48:46.116874+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:48:47.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:46 vm07 bash[23044]: cluster 2026-03-10T13:48:46.663432+0000 mgr.a (mgr.14388) 94 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:47.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:46 vm07 bash[23044]: cluster 2026-03-10T13:48:46.663432+0000 mgr.a (mgr.14388) 94 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:47.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:46 vm08 bash[23387]: audit 2026-03-10T13:48:46.111590+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:48:47.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:46 vm08 bash[23387]: audit 2026-03-10T13:48:46.111590+0000 mon.a (mon.0) 568 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:48:47.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:46 vm08 bash[23387]: audit 2026-03-10T13:48:46.112107+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:48:47.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:46 vm08 bash[23387]: audit 2026-03-10T13:48:46.112107+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": 
"auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:48:47.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:46 vm08 bash[23387]: audit 2026-03-10T13:48:46.116874+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:48:47.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:46 vm08 bash[23387]: audit 2026-03-10T13:48:46.116874+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:48:47.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:46 vm08 bash[23387]: cluster 2026-03-10T13:48:46.663432+0000 mgr.a (mgr.14388) 94 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:47.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:46 vm08 bash[23387]: cluster 2026-03-10T13:48:46.663432+0000 mgr.a (mgr.14388) 94 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:48.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:48 vm00 bash[20748]: audit 2026-03-10T13:48:47.694813+0000 mon.a (mon.0) 571 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:48.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:48 vm00 bash[20748]: audit 2026-03-10T13:48:47.694813+0000 mon.a (mon.0) 571 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:48.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:48 vm07 bash[23044]: audit 2026-03-10T13:48:47.694813+0000 mon.a (mon.0) 571 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:48.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:48 vm07 bash[23044]: audit 
2026-03-10T13:48:47.694813+0000 mon.a (mon.0) 571 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:48.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:48 vm08 bash[23387]: audit 2026-03-10T13:48:47.694813+0000 mon.a (mon.0) 571 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:48.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:48 vm08 bash[23387]: audit 2026-03-10T13:48:47.694813+0000 mon.a (mon.0) 571 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:48:49.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:49 vm00 bash[20748]: cluster 2026-03-10T13:48:48.663658+0000 mgr.a (mgr.14388) 95 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:49.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:49 vm00 bash[20748]: cluster 2026-03-10T13:48:48.663658+0000 mgr.a (mgr.14388) 95 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:49.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:49 vm07 bash[23044]: cluster 2026-03-10T13:48:48.663658+0000 mgr.a (mgr.14388) 95 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:49.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:49 vm07 bash[23044]: cluster 2026-03-10T13:48:48.663658+0000 mgr.a (mgr.14388) 95 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:49.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:49 vm08 bash[23387]: cluster 2026-03-10T13:48:48.663658+0000 mgr.a (mgr.14388) 95 : cluster [DBG] pgmap v71: 1 pgs: 1 
active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:49.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:49 vm08 bash[23387]: cluster 2026-03-10T13:48:48.663658+0000 mgr.a (mgr.14388) 95 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:50.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:50 vm07 bash[23044]: cluster 2026-03-10T13:48:50.663868+0000 mgr.a (mgr.14388) 96 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:50.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:50 vm07 bash[23044]: cluster 2026-03-10T13:48:50.663868+0000 mgr.a (mgr.14388) 96 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:51.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:50 vm08 bash[23387]: cluster 2026-03-10T13:48:50.663868+0000 mgr.a (mgr.14388) 96 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:51.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:50 vm08 bash[23387]: cluster 2026-03-10T13:48:50.663868+0000 mgr.a (mgr.14388) 96 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:51.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:50 vm00 bash[20748]: cluster 2026-03-10T13:48:50.663868+0000 mgr.a (mgr.14388) 96 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:51.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:50 vm00 bash[20748]: cluster 2026-03-10T13:48:50.663868+0000 mgr.a (mgr.14388) 96 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:52.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:52 vm07 bash[23044]: cluster 2026-03-10T13:48:52.664068+0000 
mgr.a (mgr.14388) 97 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:52.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:52 vm07 bash[23044]: cluster 2026-03-10T13:48:52.664068+0000 mgr.a (mgr.14388) 97 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:53.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:52 vm08 bash[23387]: cluster 2026-03-10T13:48:52.664068+0000 mgr.a (mgr.14388) 97 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:53.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:52 vm08 bash[23387]: cluster 2026-03-10T13:48:52.664068+0000 mgr.a (mgr.14388) 97 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:53.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:52 vm00 bash[20748]: cluster 2026-03-10T13:48:52.664068+0000 mgr.a (mgr.14388) 97 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:53.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:52 vm00 bash[20748]: cluster 2026-03-10T13:48:52.664068+0000 mgr.a (mgr.14388) 97 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:54.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:54 vm07 bash[23044]: cluster 2026-03-10T13:48:54.664298+0000 mgr.a (mgr.14388) 98 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:54.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:54 vm07 bash[23044]: cluster 2026-03-10T13:48:54.664298+0000 mgr.a (mgr.14388) 98 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:55.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:54 
vm08 bash[23387]: cluster 2026-03-10T13:48:54.664298+0000 mgr.a (mgr.14388) 98 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:55.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:54 vm08 bash[23387]: cluster 2026-03-10T13:48:54.664298+0000 mgr.a (mgr.14388) 98 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:54 vm00 bash[20748]: cluster 2026-03-10T13:48:54.664298+0000 mgr.a (mgr.14388) 98 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:55.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:54 vm00 bash[20748]: cluster 2026-03-10T13:48:54.664298+0000 mgr.a (mgr.14388) 98 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:56.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:56 vm00 bash[20748]: cluster 2026-03-10T13:48:56.664490+0000 mgr.a (mgr.14388) 99 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:56.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:56 vm00 bash[20748]: cluster 2026-03-10T13:48:56.664490+0000 mgr.a (mgr.14388) 99 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:56.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:48:56 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:48:56] "GET /metrics HTTP/1.1" 200 21337 "" "Prometheus/2.51.0" 2026-03-10T13:48:56.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:56 vm07 bash[23044]: cluster 2026-03-10T13:48:56.664490+0000 mgr.a (mgr.14388) 99 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:56.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 
13:48:56 vm07 bash[23044]: cluster 2026-03-10T13:48:56.664490+0000 mgr.a (mgr.14388) 99 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:57.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:56 vm08 bash[23387]: cluster 2026-03-10T13:48:56.664490+0000 mgr.a (mgr.14388) 99 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:57.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:56 vm08 bash[23387]: cluster 2026-03-10T13:48:56.664490+0000 mgr.a (mgr.14388) 99 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:58.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:58 vm07 bash[23044]: cluster 2026-03-10T13:48:58.664703+0000 mgr.a (mgr.14388) 100 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:58.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:48:58 vm07 bash[23044]: cluster 2026-03-10T13:48:58.664703+0000 mgr.a (mgr.14388) 100 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:59.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:58 vm08 bash[23387]: cluster 2026-03-10T13:48:58.664703+0000 mgr.a (mgr.14388) 100 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:59.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:48:58 vm08 bash[23387]: cluster 2026-03-10T13:48:58.664703+0000 mgr.a (mgr.14388) 100 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:48:59.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:58 vm00 bash[20748]: cluster 2026-03-10T13:48:58.664703+0000 mgr.a (mgr.14388) 100 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T13:48:59.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:48:58 vm00 bash[20748]: cluster 2026-03-10T13:48:58.664703+0000 mgr.a (mgr.14388) 100 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:00.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:00 vm07 bash[23044]: cluster 2026-03-10T13:49:00.664900+0000 mgr.a (mgr.14388) 101 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:00.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:00 vm07 bash[23044]: cluster 2026-03-10T13:49:00.664900+0000 mgr.a (mgr.14388) 101 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:01.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:00 vm08 bash[23387]: cluster 2026-03-10T13:49:00.664900+0000 mgr.a (mgr.14388) 101 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:01.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:00 vm08 bash[23387]: cluster 2026-03-10T13:49:00.664900+0000 mgr.a (mgr.14388) 101 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:00 vm00 bash[20748]: cluster 2026-03-10T13:49:00.664900+0000 mgr.a (mgr.14388) 101 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:01.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:00 vm00 bash[20748]: cluster 2026-03-10T13:49:00.664900+0000 mgr.a (mgr.14388) 101 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:02.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:02 vm07 bash[23044]: cluster 2026-03-10T13:49:02.665119+0000 mgr.a (mgr.14388) 102 : cluster [DBG] pgmap v78: 1 pgs: 
1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:02.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:02 vm07 bash[23044]: cluster 2026-03-10T13:49:02.665119+0000 mgr.a (mgr.14388) 102 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:02.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:02 vm07 bash[23044]: audit 2026-03-10T13:49:02.694916+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:02.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:02 vm07 bash[23044]: audit 2026-03-10T13:49:02.694916+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:02 vm08 bash[23387]: cluster 2026-03-10T13:49:02.665119+0000 mgr.a (mgr.14388) 102 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:02 vm08 bash[23387]: cluster 2026-03-10T13:49:02.665119+0000 mgr.a (mgr.14388) 102 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:02 vm08 bash[23387]: audit 2026-03-10T13:49:02.694916+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:03.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:02 vm08 bash[23387]: audit 2026-03-10T13:49:02.694916+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: 
dispatch 2026-03-10T13:49:03.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:02 vm00 bash[20748]: cluster 2026-03-10T13:49:02.665119+0000 mgr.a (mgr.14388) 102 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:03.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:02 vm00 bash[20748]: cluster 2026-03-10T13:49:02.665119+0000 mgr.a (mgr.14388) 102 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:03.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:02 vm00 bash[20748]: audit 2026-03-10T13:49:02.694916+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:03.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:02 vm00 bash[20748]: audit 2026-03-10T13:49:02.694916+0000 mon.a (mon.0) 572 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:04.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:04 vm07 bash[23044]: cluster 2026-03-10T13:49:04.665300+0000 mgr.a (mgr.14388) 103 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:04.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:04 vm07 bash[23044]: cluster 2026-03-10T13:49:04.665300+0000 mgr.a (mgr.14388) 103 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:05.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:04 vm08 bash[23387]: cluster 2026-03-10T13:49:04.665300+0000 mgr.a (mgr.14388) 103 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:05.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:04 vm08 bash[23387]: cluster 
2026-03-10T13:49:04.665300+0000 mgr.a (mgr.14388) 103 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:05.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:04 vm00 bash[20748]: cluster 2026-03-10T13:49:04.665300+0000 mgr.a (mgr.14388) 103 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:05.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:04 vm00 bash[20748]: cluster 2026-03-10T13:49:04.665300+0000 mgr.a (mgr.14388) 103 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:06.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:06 vm00 bash[20748]: cluster 2026-03-10T13:49:06.665467+0000 mgr.a (mgr.14388) 104 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:06.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:06 vm00 bash[20748]: cluster 2026-03-10T13:49:06.665467+0000 mgr.a (mgr.14388) 104 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:06.967 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:49:06 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:49:06] "GET /metrics HTTP/1.1" 200 21336 "" "Prometheus/2.51.0" 2026-03-10T13:49:06.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:06 vm07 bash[23044]: cluster 2026-03-10T13:49:06.665467+0000 mgr.a (mgr.14388) 104 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:06.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:06 vm07 bash[23044]: cluster 2026-03-10T13:49:06.665467+0000 mgr.a (mgr.14388) 104 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:07.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:06 vm08 
bash[23387]: cluster 2026-03-10T13:49:06.665467+0000 mgr.a (mgr.14388) 104 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:07.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:06 vm08 bash[23387]: cluster 2026-03-10T13:49:06.665467+0000 mgr.a (mgr.14388) 104 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:08.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:08 vm07 bash[23044]: cluster 2026-03-10T13:49:08.665615+0000 mgr.a (mgr.14388) 105 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:08.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:08 vm07 bash[23044]: cluster 2026-03-10T13:49:08.665615+0000 mgr.a (mgr.14388) 105 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:09.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:08 vm08 bash[23387]: cluster 2026-03-10T13:49:08.665615+0000 mgr.a (mgr.14388) 105 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:09.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:08 vm08 bash[23387]: cluster 2026-03-10T13:49:08.665615+0000 mgr.a (mgr.14388) 105 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:09.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:08 vm00 bash[20748]: cluster 2026-03-10T13:49:08.665615+0000 mgr.a (mgr.14388) 105 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:09.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:08 vm00 bash[20748]: cluster 2026-03-10T13:49:08.665615+0000 mgr.a (mgr.14388) 105 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T13:49:10.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:10 vm07 bash[23044]: cluster 2026-03-10T13:49:10.665798+0000 mgr.a (mgr.14388) 106 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:10.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:10 vm07 bash[23044]: cluster 2026-03-10T13:49:10.665798+0000 mgr.a (mgr.14388) 106 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:11.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:10 vm08 bash[23387]: cluster 2026-03-10T13:49:10.665798+0000 mgr.a (mgr.14388) 106 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:11.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:10 vm08 bash[23387]: cluster 2026-03-10T13:49:10.665798+0000 mgr.a (mgr.14388) 106 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:11.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:10 vm00 bash[20748]: cluster 2026-03-10T13:49:10.665798+0000 mgr.a (mgr.14388) 106 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:11.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:10 vm00 bash[20748]: cluster 2026-03-10T13:49:10.665798+0000 mgr.a (mgr.14388) 106 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:12.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:12 vm07 bash[23044]: cluster 2026-03-10T13:49:12.665991+0000 mgr.a (mgr.14388) 107 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:12.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:12 vm07 bash[23044]: cluster 2026-03-10T13:49:12.665991+0000 mgr.a (mgr.14388) 107 : cluster [DBG] pgmap v83: 1 pgs: 
1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:13.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:12 vm08 bash[23387]: cluster 2026-03-10T13:49:12.665991+0000 mgr.a (mgr.14388) 107 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:13.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:12 vm08 bash[23387]: cluster 2026-03-10T13:49:12.665991+0000 mgr.a (mgr.14388) 107 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:13.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:12 vm00 bash[20748]: cluster 2026-03-10T13:49:12.665991+0000 mgr.a (mgr.14388) 107 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:13.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:12 vm00 bash[20748]: cluster 2026-03-10T13:49:12.665991+0000 mgr.a (mgr.14388) 107 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:14.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:14 vm07 bash[23044]: cluster 2026-03-10T13:49:14.666170+0000 mgr.a (mgr.14388) 108 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:14.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:14 vm07 bash[23044]: cluster 2026-03-10T13:49:14.666170+0000 mgr.a (mgr.14388) 108 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:15.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:14 vm08 bash[23387]: cluster 2026-03-10T13:49:14.666170+0000 mgr.a (mgr.14388) 108 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:15.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:14 vm08 bash[23387]: cluster 
2026-03-10T13:49:14.666170+0000 mgr.a (mgr.14388) 108 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:15.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:14 vm00 bash[20748]: cluster 2026-03-10T13:49:14.666170+0000 mgr.a (mgr.14388) 108 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:15.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:14 vm00 bash[20748]: cluster 2026-03-10T13:49:14.666170+0000 mgr.a (mgr.14388) 108 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:16.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:16 vm00 bash[20748]: cluster 2026-03-10T13:49:16.666361+0000 mgr.a (mgr.14388) 109 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:16.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:16 vm00 bash[20748]: cluster 2026-03-10T13:49:16.666361+0000 mgr.a (mgr.14388) 109 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:16.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:49:16 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:49:16] "GET /metrics HTTP/1.1" 200 21336 "" "Prometheus/2.51.0" 2026-03-10T13:49:16.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:16 vm07 bash[23044]: cluster 2026-03-10T13:49:16.666361+0000 mgr.a (mgr.14388) 109 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:16.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:16 vm07 bash[23044]: cluster 2026-03-10T13:49:16.666361+0000 mgr.a (mgr.14388) 109 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:17.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:16 vm08 
bash[23387]: cluster 2026-03-10T13:49:16.666361+0000 mgr.a (mgr.14388) 109 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:17.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:16 vm08 bash[23387]: cluster 2026-03-10T13:49:16.666361+0000 mgr.a (mgr.14388) 109 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:17.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:17 vm07 bash[23044]: audit 2026-03-10T13:49:17.695232+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:17.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:17 vm07 bash[23044]: audit 2026-03-10T13:49:17.695232+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:18.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:17 vm08 bash[23387]: audit 2026-03-10T13:49:17.695232+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:18.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:17 vm08 bash[23387]: audit 2026-03-10T13:49:17.695232+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:18.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:17 vm00 bash[20748]: audit 2026-03-10T13:49:17.695232+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:18.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:17 vm00 bash[20748]: audit 
2026-03-10T13:49:17.695232+0000 mon.a (mon.0) 573 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:18.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:18 vm07 bash[23044]: cluster 2026-03-10T13:49:18.666528+0000 mgr.a (mgr.14388) 110 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:18.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:18 vm07 bash[23044]: cluster 2026-03-10T13:49:18.666528+0000 mgr.a (mgr.14388) 110 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:19.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:18 vm08 bash[23387]: cluster 2026-03-10T13:49:18.666528+0000 mgr.a (mgr.14388) 110 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:19.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:18 vm08 bash[23387]: cluster 2026-03-10T13:49:18.666528+0000 mgr.a (mgr.14388) 110 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:19.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:18 vm00 bash[20748]: cluster 2026-03-10T13:49:18.666528+0000 mgr.a (mgr.14388) 110 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:19.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:18 vm00 bash[20748]: cluster 2026-03-10T13:49:18.666528+0000 mgr.a (mgr.14388) 110 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:20.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:20 vm07 bash[23044]: cluster 2026-03-10T13:49:20.666819+0000 mgr.a (mgr.14388) 111 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T13:49:20.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:20 vm07 bash[23044]: cluster 2026-03-10T13:49:20.666819+0000 mgr.a (mgr.14388) 111 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:21.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:20 vm08 bash[23387]: cluster 2026-03-10T13:49:20.666819+0000 mgr.a (mgr.14388) 111 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:21.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:20 vm08 bash[23387]: cluster 2026-03-10T13:49:20.666819+0000 mgr.a (mgr.14388) 111 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:21.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:20 vm00 bash[20748]: cluster 2026-03-10T13:49:20.666819+0000 mgr.a (mgr.14388) 111 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:21.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:20 vm00 bash[20748]: cluster 2026-03-10T13:49:20.666819+0000 mgr.a (mgr.14388) 111 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:22.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:22 vm07 bash[23044]: cluster 2026-03-10T13:49:22.667061+0000 mgr.a (mgr.14388) 112 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:22.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:22 vm07 bash[23044]: cluster 2026-03-10T13:49:22.667061+0000 mgr.a (mgr.14388) 112 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:23.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:22 vm08 bash[23387]: cluster 2026-03-10T13:49:22.667061+0000 mgr.a (mgr.14388) 112 : cluster [DBG] pgmap v88: 1 pgs: 
1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:23.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:22 vm08 bash[23387]: cluster 2026-03-10T13:49:22.667061+0000 mgr.a (mgr.14388) 112 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:23.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:22 vm00 bash[20748]: cluster 2026-03-10T13:49:22.667061+0000 mgr.a (mgr.14388) 112 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:23.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:22 vm00 bash[20748]: cluster 2026-03-10T13:49:22.667061+0000 mgr.a (mgr.14388) 112 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:24.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:24 vm07 bash[23044]: cluster 2026-03-10T13:49:24.667340+0000 mgr.a (mgr.14388) 113 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:24.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:24 vm07 bash[23044]: cluster 2026-03-10T13:49:24.667340+0000 mgr.a (mgr.14388) 113 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:25.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:24 vm08 bash[23387]: cluster 2026-03-10T13:49:24.667340+0000 mgr.a (mgr.14388) 113 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:25.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:24 vm08 bash[23387]: cluster 2026-03-10T13:49:24.667340+0000 mgr.a (mgr.14388) 113 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:25.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:24 vm00 bash[20748]: cluster 
2026-03-10T13:49:24.667340+0000 mgr.a (mgr.14388) 113 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:25.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:24 vm00 bash[20748]: cluster 2026-03-10T13:49:24.667340+0000 mgr.a (mgr.14388) 113 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:26.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:26 vm00 bash[20748]: cluster 2026-03-10T13:49:26.667598+0000 mgr.a (mgr.14388) 114 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:26.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:26 vm00 bash[20748]: cluster 2026-03-10T13:49:26.667598+0000 mgr.a (mgr.14388) 114 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:26.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:49:26 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:49:26] "GET /metrics HTTP/1.1" 200 21334 "" "Prometheus/2.51.0" 2026-03-10T13:49:26.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:26 vm07 bash[23044]: cluster 2026-03-10T13:49:26.667598+0000 mgr.a (mgr.14388) 114 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:26.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:26 vm07 bash[23044]: cluster 2026-03-10T13:49:26.667598+0000 mgr.a (mgr.14388) 114 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:27.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:26 vm08 bash[23387]: cluster 2026-03-10T13:49:26.667598+0000 mgr.a (mgr.14388) 114 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:27.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:26 vm08 
bash[23387]: cluster 2026-03-10T13:49:26.667598+0000 mgr.a (mgr.14388) 114 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:28.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:28 vm07 bash[23044]: cluster 2026-03-10T13:49:28.667817+0000 mgr.a (mgr.14388) 115 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:28.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:28 vm07 bash[23044]: cluster 2026-03-10T13:49:28.667817+0000 mgr.a (mgr.14388) 115 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:29.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:28 vm08 bash[23387]: cluster 2026-03-10T13:49:28.667817+0000 mgr.a (mgr.14388) 115 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:29.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:28 vm08 bash[23387]: cluster 2026-03-10T13:49:28.667817+0000 mgr.a (mgr.14388) 115 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:29.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:28 vm00 bash[20748]: cluster 2026-03-10T13:49:28.667817+0000 mgr.a (mgr.14388) 115 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:29.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:28 vm00 bash[20748]: cluster 2026-03-10T13:49:28.667817+0000 mgr.a (mgr.14388) 115 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:30.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:30 vm07 bash[23044]: cluster 2026-03-10T13:49:30.668031+0000 mgr.a (mgr.14388) 116 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T13:49:30.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:30 vm07 bash[23044]: cluster 2026-03-10T13:49:30.668031+0000 mgr.a (mgr.14388) 116 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:31.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:30 vm08 bash[23387]: cluster 2026-03-10T13:49:30.668031+0000 mgr.a (mgr.14388) 116 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:31.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:30 vm08 bash[23387]: cluster 2026-03-10T13:49:30.668031+0000 mgr.a (mgr.14388) 116 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:30 vm00 bash[20748]: cluster 2026-03-10T13:49:30.668031+0000 mgr.a (mgr.14388) 116 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:31.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:30 vm00 bash[20748]: cluster 2026-03-10T13:49:30.668031+0000 mgr.a (mgr.14388) 116 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:32.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:32 vm07 bash[23044]: cluster 2026-03-10T13:49:32.668204+0000 mgr.a (mgr.14388) 117 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:32.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:32 vm07 bash[23044]: cluster 2026-03-10T13:49:32.668204+0000 mgr.a (mgr.14388) 117 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:32.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:32 vm07 bash[23044]: audit 2026-03-10T13:49:32.695172+0000 mon.a (mon.0) 574 : audit [DBG] from='mgr.14388 
192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:32.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:32 vm07 bash[23044]: audit 2026-03-10T13:49:32.695172+0000 mon.a (mon.0) 574 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:32 vm08 bash[23387]: cluster 2026-03-10T13:49:32.668204+0000 mgr.a (mgr.14388) 117 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:32 vm08 bash[23387]: cluster 2026-03-10T13:49:32.668204+0000 mgr.a (mgr.14388) 117 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:32 vm08 bash[23387]: audit 2026-03-10T13:49:32.695172+0000 mon.a (mon.0) 574 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:33.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:32 vm08 bash[23387]: audit 2026-03-10T13:49:32.695172+0000 mon.a (mon.0) 574 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:33.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:32 vm00 bash[20748]: cluster 2026-03-10T13:49:32.668204+0000 mgr.a (mgr.14388) 117 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:33.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:32 vm00 bash[20748]: cluster 2026-03-10T13:49:32.668204+0000 mgr.a (mgr.14388) 117 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 
GiB / 60 GiB avail 2026-03-10T13:49:33.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:32 vm00 bash[20748]: audit 2026-03-10T13:49:32.695172+0000 mon.a (mon.0) 574 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:33.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:32 vm00 bash[20748]: audit 2026-03-10T13:49:32.695172+0000 mon.a (mon.0) 574 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:34.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:34 vm07 bash[23044]: cluster 2026-03-10T13:49:34.668443+0000 mgr.a (mgr.14388) 118 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:34.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:34 vm07 bash[23044]: cluster 2026-03-10T13:49:34.668443+0000 mgr.a (mgr.14388) 118 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:35.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:34 vm08 bash[23387]: cluster 2026-03-10T13:49:34.668443+0000 mgr.a (mgr.14388) 118 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:35.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:34 vm08 bash[23387]: cluster 2026-03-10T13:49:34.668443+0000 mgr.a (mgr.14388) 118 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:35.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:34 vm00 bash[20748]: cluster 2026-03-10T13:49:34.668443+0000 mgr.a (mgr.14388) 118 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:35.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:34 vm00 bash[20748]: cluster 
2026-03-10T13:49:34.668443+0000 mgr.a (mgr.14388) 118 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:36.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:36 vm00 bash[20748]: cluster 2026-03-10T13:49:36.668699+0000 mgr.a (mgr.14388) 119 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:36.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:36 vm00 bash[20748]: cluster 2026-03-10T13:49:36.668699+0000 mgr.a (mgr.14388) 119 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:36.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:49:36 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:49:36] "GET /metrics HTTP/1.1" 200 21338 "" "Prometheus/2.51.0" 2026-03-10T13:49:36.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:36 vm07 bash[23044]: cluster 2026-03-10T13:49:36.668699+0000 mgr.a (mgr.14388) 119 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:36.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:36 vm07 bash[23044]: cluster 2026-03-10T13:49:36.668699+0000 mgr.a (mgr.14388) 119 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:37.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:36 vm08 bash[23387]: cluster 2026-03-10T13:49:36.668699+0000 mgr.a (mgr.14388) 119 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:37.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:36 vm08 bash[23387]: cluster 2026-03-10T13:49:36.668699+0000 mgr.a (mgr.14388) 119 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:38.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:38 vm07 
bash[23044]: cluster 2026-03-10T13:49:38.668896+0000 mgr.a (mgr.14388) 120 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:38.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:38 vm07 bash[23044]: cluster 2026-03-10T13:49:38.668896+0000 mgr.a (mgr.14388) 120 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:39.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:38 vm08 bash[23387]: cluster 2026-03-10T13:49:38.668896+0000 mgr.a (mgr.14388) 120 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:39.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:38 vm08 bash[23387]: cluster 2026-03-10T13:49:38.668896+0000 mgr.a (mgr.14388) 120 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:38 vm00 bash[20748]: cluster 2026-03-10T13:49:38.668896+0000 mgr.a (mgr.14388) 120 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:39.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:38 vm00 bash[20748]: cluster 2026-03-10T13:49:38.668896+0000 mgr.a (mgr.14388) 120 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:40.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:40 vm07 bash[23044]: cluster 2026-03-10T13:49:40.669170+0000 mgr.a (mgr.14388) 121 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:40.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:40 vm07 bash[23044]: cluster 2026-03-10T13:49:40.669170+0000 mgr.a (mgr.14388) 121 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T13:49:41.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:40 vm08 bash[23387]: cluster 2026-03-10T13:49:40.669170+0000 mgr.a (mgr.14388) 121 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:41.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:40 vm08 bash[23387]: cluster 2026-03-10T13:49:40.669170+0000 mgr.a (mgr.14388) 121 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:41.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:40 vm00 bash[20748]: cluster 2026-03-10T13:49:40.669170+0000 mgr.a (mgr.14388) 121 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:41.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:40 vm00 bash[20748]: cluster 2026-03-10T13:49:40.669170+0000 mgr.a (mgr.14388) 121 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:42.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:42 vm07 bash[23044]: cluster 2026-03-10T13:49:42.669464+0000 mgr.a (mgr.14388) 122 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:42.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:42 vm07 bash[23044]: cluster 2026-03-10T13:49:42.669464+0000 mgr.a (mgr.14388) 122 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:43.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:42 vm08 bash[23387]: cluster 2026-03-10T13:49:42.669464+0000 mgr.a (mgr.14388) 122 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:43.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:42 vm08 bash[23387]: cluster 2026-03-10T13:49:42.669464+0000 mgr.a (mgr.14388) 122 : cluster [DBG] pgmap v98: 1 pgs: 
1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:43.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:42 vm00 bash[20748]: cluster 2026-03-10T13:49:42.669464+0000 mgr.a (mgr.14388) 122 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:43.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:42 vm00 bash[20748]: cluster 2026-03-10T13:49:42.669464+0000 mgr.a (mgr.14388) 122 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:44.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:44 vm07 bash[23044]: cluster 2026-03-10T13:49:44.669704+0000 mgr.a (mgr.14388) 123 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:44.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:44 vm07 bash[23044]: cluster 2026-03-10T13:49:44.669704+0000 mgr.a (mgr.14388) 123 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:45.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:44 vm08 bash[23387]: cluster 2026-03-10T13:49:44.669704+0000 mgr.a (mgr.14388) 123 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:45.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:44 vm08 bash[23387]: cluster 2026-03-10T13:49:44.669704+0000 mgr.a (mgr.14388) 123 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:45.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:44 vm00 bash[20748]: cluster 2026-03-10T13:49:44.669704+0000 mgr.a (mgr.14388) 123 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:45.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:44 vm00 bash[20748]: cluster 
2026-03-10T13:49:44.669704+0000 mgr.a (mgr.14388) 123 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:46.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:46 vm00 bash[20748]: audit 2026-03-10T13:49:46.158587+0000 mon.a (mon.0) 575 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:49:46.469 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:46 vm00 bash[20748]: audit 2026-03-10T13:49:46.158587+0000 mon.a (mon.0) 575 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:49:46.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:46 vm07 bash[23044]: audit 2026-03-10T13:49:46.158587+0000 mon.a (mon.0) 575 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:49:46.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:46 vm07 bash[23044]: audit 2026-03-10T13:49:46.158587+0000 mon.a (mon.0) 575 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:49:46.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:46 vm08 bash[23387]: audit 2026-03-10T13:49:46.158587+0000 mon.a (mon.0) 575 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:49:46.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:46 vm08 bash[23387]: audit 2026-03-10T13:49:46.158587+0000 mon.a (mon.0) 575 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:49:46.967 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:49:46 vm00 bash[21015]: ::ffff:192.168.123.107 - - 
[10/Mar/2026:13:49:46] "GET /metrics HTTP/1.1" 200 21338 "" "Prometheus/2.51.0" 2026-03-10T13:49:47.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:47 vm00 bash[20748]: audit 2026-03-10T13:49:46.494993+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:49:47.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:47 vm00 bash[20748]: audit 2026-03-10T13:49:46.494993+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:49:47.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:47 vm00 bash[20748]: audit 2026-03-10T13:49:46.495499+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:49:47.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:47 vm00 bash[20748]: audit 2026-03-10T13:49:46.495499+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:49:47.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:47 vm00 bash[20748]: audit 2026-03-10T13:49:46.500573+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:49:47.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:47 vm00 bash[20748]: audit 2026-03-10T13:49:46.500573+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:49:47.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:47 vm00 bash[20748]: cluster 2026-03-10T13:49:46.669917+0000 mgr.a (mgr.14388) 124 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:47.468 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:47 vm00 bash[20748]: cluster 2026-03-10T13:49:46.669917+0000 mgr.a (mgr.14388) 124 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:47.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:47 vm07 bash[23044]: audit 2026-03-10T13:49:46.494993+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:49:47.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:47 vm07 bash[23044]: audit 2026-03-10T13:49:46.494993+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:49:47.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:47 vm07 bash[23044]: audit 2026-03-10T13:49:46.495499+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:49:47.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:47 vm07 bash[23044]: audit 2026-03-10T13:49:46.495499+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:49:47.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:47 vm07 bash[23044]: audit 2026-03-10T13:49:46.500573+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:49:47.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:47 vm07 bash[23044]: audit 2026-03-10T13:49:46.500573+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:49:47.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:47 vm07 bash[23044]: cluster 2026-03-10T13:49:46.669917+0000 mgr.a (mgr.14388) 
124 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:47.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:47 vm07 bash[23044]: cluster 2026-03-10T13:49:46.669917+0000 mgr.a (mgr.14388) 124 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:47.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:47 vm08 bash[23387]: audit 2026-03-10T13:49:46.494993+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:49:47.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:47 vm08 bash[23387]: audit 2026-03-10T13:49:46.494993+0000 mon.a (mon.0) 576 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:49:47.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:47 vm08 bash[23387]: audit 2026-03-10T13:49:46.495499+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:49:47.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:47 vm08 bash[23387]: audit 2026-03-10T13:49:46.495499+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:49:47.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:47 vm08 bash[23387]: audit 2026-03-10T13:49:46.500573+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:49:47.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:47 vm08 bash[23387]: audit 2026-03-10T13:49:46.500573+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:49:47.589 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:47 vm08 bash[23387]: cluster 2026-03-10T13:49:46.669917+0000 mgr.a (mgr.14388) 124 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:47.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:47 vm08 bash[23387]: cluster 2026-03-10T13:49:46.669917+0000 mgr.a (mgr.14388) 124 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:48.839 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:48 vm08 bash[23387]: audit 2026-03-10T13:49:47.695553+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:48.839 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:48 vm08 bash[23387]: audit 2026-03-10T13:49:47.695553+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:48.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:48 vm00 bash[20748]: audit 2026-03-10T13:49:47.695553+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:48.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:48 vm00 bash[20748]: audit 2026-03-10T13:49:47.695553+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:48.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:48 vm07 bash[23044]: audit 2026-03-10T13:49:47.695553+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:48.997 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:48 vm07 bash[23044]: audit 2026-03-10T13:49:47.695553+0000 mon.a (mon.0) 579 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:49:49.838 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:49 vm08 bash[23387]: cluster 2026-03-10T13:49:48.670121+0000 mgr.a (mgr.14388) 125 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:49.839 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:49 vm08 bash[23387]: cluster 2026-03-10T13:49:48.670121+0000 mgr.a (mgr.14388) 125 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:49.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:49 vm00 bash[20748]: cluster 2026-03-10T13:49:48.670121+0000 mgr.a (mgr.14388) 125 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:49.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:49 vm00 bash[20748]: cluster 2026-03-10T13:49:48.670121+0000 mgr.a (mgr.14388) 125 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:49.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:49 vm07 bash[23044]: cluster 2026-03-10T13:49:48.670121+0000 mgr.a (mgr.14388) 125 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:49.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:49 vm07 bash[23044]: cluster 2026-03-10T13:49:48.670121+0000 mgr.a (mgr.14388) 125 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:50.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:50 vm07 bash[23044]: cluster 2026-03-10T13:49:50.670403+0000 mgr.a (mgr.14388) 126 : cluster [DBG] 
pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:50.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:50 vm07 bash[23044]: cluster 2026-03-10T13:49:50.670403+0000 mgr.a (mgr.14388) 126 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:51.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:50 vm08 bash[23387]: cluster 2026-03-10T13:49:50.670403+0000 mgr.a (mgr.14388) 126 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:51.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:50 vm08 bash[23387]: cluster 2026-03-10T13:49:50.670403+0000 mgr.a (mgr.14388) 126 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:51.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:50 vm00 bash[20748]: cluster 2026-03-10T13:49:50.670403+0000 mgr.a (mgr.14388) 126 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:51.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:50 vm00 bash[20748]: cluster 2026-03-10T13:49:50.670403+0000 mgr.a (mgr.14388) 126 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:52.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:52 vm07 bash[23044]: cluster 2026-03-10T13:49:52.670618+0000 mgr.a (mgr.14388) 127 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:52.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:52 vm07 bash[23044]: cluster 2026-03-10T13:49:52.670618+0000 mgr.a (mgr.14388) 127 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:53.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:52 vm08 bash[23387]: 
cluster 2026-03-10T13:49:52.670618+0000 mgr.a (mgr.14388) 127 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:53.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:52 vm08 bash[23387]: cluster 2026-03-10T13:49:52.670618+0000 mgr.a (mgr.14388) 127 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:53.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:52 vm00 bash[20748]: cluster 2026-03-10T13:49:52.670618+0000 mgr.a (mgr.14388) 127 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:53.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:52 vm00 bash[20748]: cluster 2026-03-10T13:49:52.670618+0000 mgr.a (mgr.14388) 127 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:54.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:54 vm07 bash[23044]: cluster 2026-03-10T13:49:54.670928+0000 mgr.a (mgr.14388) 128 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:54.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:54 vm07 bash[23044]: cluster 2026-03-10T13:49:54.670928+0000 mgr.a (mgr.14388) 128 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:55.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:54 vm08 bash[23387]: cluster 2026-03-10T13:49:54.670928+0000 mgr.a (mgr.14388) 128 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:55.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:54 vm08 bash[23387]: cluster 2026-03-10T13:49:54.670928+0000 mgr.a (mgr.14388) 128 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:55.217 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:54 vm00 bash[20748]: cluster 2026-03-10T13:49:54.670928+0000 mgr.a (mgr.14388) 128 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:55.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:54 vm00 bash[20748]: cluster 2026-03-10T13:49:54.670928+0000 mgr.a (mgr.14388) 128 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:56.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:56 vm00 bash[20748]: cluster 2026-03-10T13:49:56.671185+0000 mgr.a (mgr.14388) 129 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:56.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:56 vm00 bash[20748]: cluster 2026-03-10T13:49:56.671185+0000 mgr.a (mgr.14388) 129 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:56.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:49:56 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:49:56] "GET /metrics HTTP/1.1" 200 21334 "" "Prometheus/2.51.0" 2026-03-10T13:49:56.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:56 vm07 bash[23044]: cluster 2026-03-10T13:49:56.671185+0000 mgr.a (mgr.14388) 129 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:56.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:56 vm07 bash[23044]: cluster 2026-03-10T13:49:56.671185+0000 mgr.a (mgr.14388) 129 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:57.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:56 vm08 bash[23387]: cluster 2026-03-10T13:49:56.671185+0000 mgr.a (mgr.14388) 129 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T13:49:57.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:56 vm08 bash[23387]: cluster 2026-03-10T13:49:56.671185+0000 mgr.a (mgr.14388) 129 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:58.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:58 vm07 bash[23044]: cluster 2026-03-10T13:49:58.671366+0000 mgr.a (mgr.14388) 130 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:58.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:49:58 vm07 bash[23044]: cluster 2026-03-10T13:49:58.671366+0000 mgr.a (mgr.14388) 130 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:59.088 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:58 vm08 bash[23387]: cluster 2026-03-10T13:49:58.671366+0000 mgr.a (mgr.14388) 130 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:59.089 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:49:58 vm08 bash[23387]: cluster 2026-03-10T13:49:58.671366+0000 mgr.a (mgr.14388) 130 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:59.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:58 vm00 bash[20748]: cluster 2026-03-10T13:49:58.671366+0000 mgr.a (mgr.14388) 130 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:49:59.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:49:58 vm00 bash[20748]: cluster 2026-03-10T13:49:58.671366+0000 mgr.a (mgr.14388) 130 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:00.302 INFO:teuthology.orchestra.run.vm00.stderr:+ ceph orch ls 2026-03-10T13:50:00.338 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:00 vm08 bash[23387]: cluster 
2026-03-10T13:50:00.000113+0000 mon.a (mon.0) 580 : cluster [INF] overall HEALTH_OK 2026-03-10T13:50:00.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:00 vm08 bash[23387]: cluster 2026-03-10T13:50:00.000113+0000 mon.a (mon.0) 580 : cluster [INF] overall HEALTH_OK 2026-03-10T13:50:00.452 INFO:teuthology.orchestra.run.vm00.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT 2026-03-10T13:50:00.452 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager ?:9093,9094 1/1 3m ago 4m count:1 2026-03-10T13:50:00.452 INFO:teuthology.orchestra.run.vm00.stdout:grafana ?:3000 1/1 3m ago 4m count:1 2026-03-10T13:50:00.452 INFO:teuthology.orchestra.run.vm00.stdout:mgr 2/2 3m ago 6m vm00=a;vm07=b;count:2 2026-03-10T13:50:00.452 INFO:teuthology.orchestra.run.vm00.stdout:mon 3/3 3m ago 6m vm00:192.168.123.100=a;vm07:192.168.123.107=b;vm08:192.168.123.108=c;count:3 2026-03-10T13:50:00.452 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter ?:9100 3/3 3m ago 4m * 2026-03-10T13:50:00.452 INFO:teuthology.orchestra.run.vm00.stdout:osd 3 3m ago - 2026-03-10T13:50:00.452 INFO:teuthology.orchestra.run.vm00.stdout:prometheus ?:9095 1/1 3m ago 4m count:1 2026-03-10T13:50:00.462 INFO:teuthology.orchestra.run.vm00.stderr:+ ceph orch ps 2026-03-10T13:50:00.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:00 vm00 bash[20748]: cluster 2026-03-10T13:50:00.000113+0000 mon.a (mon.0) 580 : cluster [INF] overall HEALTH_OK 2026-03-10T13:50:00.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:00 vm00 bash[20748]: cluster 2026-03-10T13:50:00.000113+0000 mon.a (mon.0) 580 : cluster [INF] overall HEALTH_OK 2026-03-10T13:50:00.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:00 vm07 bash[23044]: cluster 2026-03-10T13:50:00.000113+0000 mon.a (mon.0) 580 : cluster [INF] overall HEALTH_OK 2026-03-10T13:50:00.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:00 vm07 bash[23044]: cluster 2026-03-10T13:50:00.000113+0000 mon.a (mon.0) 580 : cluster [INF] overall HEALTH_OK 
2026-03-10T13:50:00.619 INFO:teuthology.orchestra.run.vm00.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T13:50:00.620 INFO:teuthology.orchestra.run.vm00.stdout:alertmanager.vm08 vm08 *:9093,9094 running (3m) 3m ago 3m 14.1M - 0.25.0 c8568f914cd2 86f36501d4c5 2026-03-10T13:50:00.620 INFO:teuthology.orchestra.run.vm00.stdout:grafana.vm00 vm00 *:3000 running (3m) 3m ago 3m 49.2M - 10.4.0 c8b91775d855 35a3217ae245 2026-03-10T13:50:00.620 INFO:teuthology.orchestra.run.vm00.stdout:mgr.a vm00 *:9283,8765 running (7m) 3m ago 7m 521M - 19.2.3-678-ge911bdeb 654f31e6858e 36aef4229b8f 2026-03-10T13:50:00.620 INFO:teuthology.orchestra.run.vm00.stdout:mgr.b vm07 *:8443,8765 running (6m) 3m ago 6m 463M - 19.2.3-678-ge911bdeb 654f31e6858e 1bc84bb5cda3 2026-03-10T13:50:00.620 INFO:teuthology.orchestra.run.vm00.stdout:mon.a vm00 running (7m) 3m ago 7m 45.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e b07593e83085 2026-03-10T13:50:00.620 INFO:teuthology.orchestra.run.vm00.stdout:mon.b vm07 running (6m) 3m ago 6m 40.3M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 65bbb19c9410 2026-03-10T13:50:00.620 INFO:teuthology.orchestra.run.vm00.stdout:mon.c vm08 running (6m) 3m ago 6m 41.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e c47f4ece119d 2026-03-10T13:50:00.620 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.vm00 vm00 *:9100 running (3m) 3m ago 3m 5787k - 1.7.0 72c9c2088986 fe5c8dd4872d 2026-03-10T13:50:00.620 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.vm07 vm07 *:9100 running (3m) 3m ago 3m 2723k - 1.7.0 72c9c2088986 e421bb850c05 2026-03-10T13:50:00.620 INFO:teuthology.orchestra.run.vm00.stdout:node-exporter.vm08 vm08 *:9100 running (3m) 3m ago 3m 5764k - 1.7.0 72c9c2088986 af38bca9059e 2026-03-10T13:50:00.620 INFO:teuthology.orchestra.run.vm00.stdout:osd.0 vm00 running (6m) 3m ago 6m 36.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 6fa86be10d0f 2026-03-10T13:50:00.620 INFO:teuthology.orchestra.run.vm00.stdout:osd.1 vm07 
running (5m) 3m ago 5m 57.6M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 87ed7b30ed68 2026-03-10T13:50:00.620 INFO:teuthology.orchestra.run.vm00.stdout:osd.2 vm08 running (4m) 3m ago 4m 34.5M 2503M 19.2.3-678-ge911bdeb 654f31e6858e 6f1860491c69 2026-03-10T13:50:00.620 INFO:teuthology.orchestra.run.vm00.stdout:prometheus.vm07 vm07 *:9095 running (3m) 3m ago 3m 23.7M - 2.51.0 1d3b7f56885b 1b67f5664909 2026-03-10T13:50:00.630 INFO:teuthology.orchestra.run.vm00.stderr:+ ceph orch host ls 2026-03-10T13:50:00.780 INFO:teuthology.orchestra.run.vm00.stdout:HOST ADDR LABELS STATUS 2026-03-10T13:50:00.780 INFO:teuthology.orchestra.run.vm00.stdout:vm00 192.168.123.100 2026-03-10T13:50:00.780 INFO:teuthology.orchestra.run.vm00.stdout:vm07 192.168.123.107 2026-03-10T13:50:00.780 INFO:teuthology.orchestra.run.vm00.stdout:vm08 192.168.123.108 2026-03-10T13:50:00.780 INFO:teuthology.orchestra.run.vm00.stdout:3 hosts in cluster 2026-03-10T13:50:00.790 INFO:teuthology.orchestra.run.vm00.stderr:++ ceph orch ps --daemon-type mon -f json 2026-03-10T13:50:00.790 INFO:teuthology.orchestra.run.vm00.stderr:++ jq -r 'last | .daemon_name' 2026-03-10T13:50:00.950 INFO:teuthology.orchestra.run.vm00.stderr:+ MON_DAEMON=mon.c 2026-03-10T13:50:00.950 INFO:teuthology.orchestra.run.vm00.stderr:++ ceph orch ps --daemon-type grafana -f json 2026-03-10T13:50:00.950 INFO:teuthology.orchestra.run.vm00.stderr:++ jq -r .hostname 2026-03-10T13:50:00.952 INFO:teuthology.orchestra.run.vm00.stderr:++ jq -e '.[]' 2026-03-10T13:50:01.109 INFO:teuthology.orchestra.run.vm00.stderr:+ GRAFANA_HOST=vm00 2026-03-10T13:50:01.109 INFO:teuthology.orchestra.run.vm00.stderr:++ ceph orch ps --daemon-type prometheus -f json 2026-03-10T13:50:01.109 INFO:teuthology.orchestra.run.vm00.stderr:++ jq -r .hostname 2026-03-10T13:50:01.111 INFO:teuthology.orchestra.run.vm00.stderr:++ jq -e '.[]' 2026-03-10T13:50:01.271 INFO:teuthology.orchestra.run.vm00.stderr:+ PROM_HOST=vm07 2026-03-10T13:50:01.271 
INFO:teuthology.orchestra.run.vm00.stderr:++ ceph orch ps --daemon-type alertmanager -f json 2026-03-10T13:50:01.271 INFO:teuthology.orchestra.run.vm00.stderr:++ jq -r .hostname 2026-03-10T13:50:01.273 INFO:teuthology.orchestra.run.vm00.stderr:++ jq -e '.[]' 2026-03-10T13:50:01.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:01 vm08 bash[23387]: audit 2026-03-10T13:50:00.450999+0000 mgr.a (mgr.14388) 131 : audit [DBG] from='client.24304 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:50:01.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:01 vm08 bash[23387]: audit 2026-03-10T13:50:00.450999+0000 mgr.a (mgr.14388) 131 : audit [DBG] from='client.24304 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:50:01.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:01 vm08 bash[23387]: audit 2026-03-10T13:50:00.616267+0000 mgr.a (mgr.14388) 132 : audit [DBG] from='client.14430 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:50:01.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:01 vm08 bash[23387]: audit 2026-03-10T13:50:00.616267+0000 mgr.a (mgr.14388) 132 : audit [DBG] from='client.14430 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:50:01.428 INFO:teuthology.orchestra.run.vm00.stderr:+ ALERTM_HOST=vm08 2026-03-10T13:50:01.428 INFO:teuthology.orchestra.run.vm00.stderr:++ ceph orch host ls -f json 2026-03-10T13:50:01.428 INFO:teuthology.orchestra.run.vm00.stderr:++ jq -r --arg GRAFANA_HOST vm00 '.[] | select(.hostname==$GRAFANA_HOST) | .addr' 2026-03-10T13:50:01.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:01 vm00 bash[20748]: audit 2026-03-10T13:50:00.450999+0000 mgr.a (mgr.14388) 131 : audit [DBG] from='client.24304 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 
2026-03-10T13:50:01.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:01 vm00 bash[20748]: audit 2026-03-10T13:50:00.450999+0000 mgr.a (mgr.14388) 131 : audit [DBG] from='client.24304 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:50:01.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:01 vm00 bash[20748]: audit 2026-03-10T13:50:00.616267+0000 mgr.a (mgr.14388) 132 : audit [DBG] from='client.14430 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:50:01.467 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:01 vm00 bash[20748]: audit 2026-03-10T13:50:00.616267+0000 mgr.a (mgr.14388) 132 : audit [DBG] from='client.14430 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:50:01.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:01 vm07 bash[23044]: audit 2026-03-10T13:50:00.450999+0000 mgr.a (mgr.14388) 131 : audit [DBG] from='client.24304 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:50:01.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:01 vm07 bash[23044]: audit 2026-03-10T13:50:00.450999+0000 mgr.a (mgr.14388) 131 : audit [DBG] from='client.24304 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:50:01.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:01 vm07 bash[23044]: audit 2026-03-10T13:50:00.616267+0000 mgr.a (mgr.14388) 132 : audit [DBG] from='client.14430 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:50:01.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:01 vm07 bash[23044]: audit 2026-03-10T13:50:00.616267+0000 mgr.a (mgr.14388) 132 : audit [DBG] from='client.14430 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:50:01.583 
INFO:teuthology.orchestra.run.vm00.stderr:+ GRAFANA_IP=192.168.123.100 2026-03-10T13:50:01.584 INFO:teuthology.orchestra.run.vm00.stderr:++ ceph orch host ls -f json 2026-03-10T13:50:01.584 INFO:teuthology.orchestra.run.vm00.stderr:++ jq -r --arg PROM_HOST vm07 '.[] | select(.hostname==$PROM_HOST) | .addr' 2026-03-10T13:50:01.736 INFO:teuthology.orchestra.run.vm00.stderr:+ PROM_IP=192.168.123.107 2026-03-10T13:50:01.736 INFO:teuthology.orchestra.run.vm00.stderr:++ ceph orch host ls -f json 2026-03-10T13:50:01.736 INFO:teuthology.orchestra.run.vm00.stderr:++ jq -r --arg ALERTM_HOST vm08 '.[] | select(.hostname==$ALERTM_HOST) | .addr' 2026-03-10T13:50:01.891 INFO:teuthology.orchestra.run.vm00.stderr:+ ALERTM_IP=192.168.123.108 2026-03-10T13:50:01.891 INFO:teuthology.orchestra.run.vm00.stderr:++ ceph orch host ls -f json 2026-03-10T13:50:01.891 INFO:teuthology.orchestra.run.vm00.stderr:++ jq -r '.[] | .addr' 2026-03-10T13:50:02.044 INFO:teuthology.orchestra.run.vm00.stderr:+ ALL_HOST_IPS='192.168.123.100 2026-03-10T13:50:02.044 INFO:teuthology.orchestra.run.vm00.stderr:192.168.123.107 2026-03-10T13:50:02.044 INFO:teuthology.orchestra.run.vm00.stderr:192.168.123.108' 2026-03-10T13:50:02.044 INFO:teuthology.orchestra.run.vm00.stderr:+ for ip in $ALL_HOST_IPS 2026-03-10T13:50:02.044 INFO:teuthology.orchestra.run.vm00.stderr:+ curl -s http://192.168.123.100:9100/metric 2026-03-10T13:50:02.047 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:50:02.047 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:50:02.047 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:50:02.047 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:50:02.047 INFO:teuthology.orchestra.run.vm00.stdout: Node Exporter 2026-03-10T13:50:02.047 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:50:02.048 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:50:02.048 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:50:02.048 
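[editor's note] The jq pipelines traced above (`jq -r 'last | .daemon_name'` over `ceph orch ps --daemon-type mon -f json`, and `jq -r --arg H ... 'select(.hostname==$H) | .addr'` over `ceph orch host ls -f json`) can be sketched in Python. The field names match what the script actually reads in this log; the sample payloads themselves are illustrative, not copied cluster output:

```python
import json

# Hypothetical, reduced sample of `ceph orch ps --daemon-type mon -f json`:
# only the fields the workunit script reads are included.
orch_ps_mon = json.loads("""
[
  {"daemon_name": "mon.a", "hostname": "vm00"},
  {"daemon_name": "mon.b", "hostname": "vm07"},
  {"daemon_name": "mon.c", "hostname": "vm08"}
]
""")

# jq -r 'last | .daemon_name' -> daemon_name of the last array element.
mon_daemon = orch_ps_mon[-1]["daemon_name"]

# Hypothetical, reduced sample of `ceph orch host ls -f json`.
host_ls = json.loads("""
[
  {"hostname": "vm00", "addr": "192.168.123.100"},
  {"hostname": "vm07", "addr": "192.168.123.107"},
  {"hostname": "vm08", "addr": "192.168.123.108"}
]
""")

# jq -r --arg H <host> '.[] | select(.hostname==$H) | .addr'
def addr_of(hostname):
    return next(h["addr"] for h in host_ls if h["hostname"] == hostname)

print(mon_daemon)       # mon.c
print(addr_of("vm00"))  # 192.168.123.100
```

With the sample data above this reproduces the values the log records (`MON_DAEMON=mon.c`, `GRAFANA_IP=192.168.123.100`), which is how the script later knows which mon to stop and which IPs to probe.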
INFO:teuthology.orchestra.run.vm00.stdout: [Node Exporter landing page HTML elided: "Prometheus Node Exporter", Version: (version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b), "Metrics" link; markup stripped]
2026-03-10T13:50:02.048 INFO:teuthology.orchestra.run.vm00.stderr:+ for ip in $ALL_HOST_IPS
2026-03-10T13:50:02.048 INFO:teuthology.orchestra.run.vm00.stderr:+ curl -s http://192.168.123.107:9100/metric
2026-03-10T13:50:02.051 INFO:teuthology.orchestra.run.vm00.stdout: [Node Exporter landing page HTML elided: "Prometheus Node Exporter", Version: (version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b), "Metrics" link; markup stripped]
2026-03-10T13:50:02.051 INFO:teuthology.orchestra.run.vm00.stderr:+ for ip in $ALL_HOST_IPS
2026-03-10T13:50:02.051 INFO:teuthology.orchestra.run.vm00.stderr:+ curl -s http://192.168.123.108:9100/metric
2026-03-10T13:50:02.053 INFO:teuthology.orchestra.run.vm00.stdout: [Node Exporter landing page HTML elided: "Prometheus Node Exporter", Version: (version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b), "Metrics" link; markup stripped]
2026-03-10T13:50:02.054 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:50:02.054 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-10T13:50:02.054 INFO:teuthology.orchestra.run.vm00.stderr:+ curl -k -s https://192.168.123.100:3000/api/health 2026-03-10T13:50:02.064 INFO:teuthology.orchestra.run.vm00.stdout:{ 2026-03-10T13:50:02.064 INFO:teuthology.orchestra.run.vm00.stdout: "commit": "03f502a94d17f7dc4e6c34acdf8428aedd986e4c", 2026-03-10T13:50:02.064 INFO:teuthology.orchestra.run.vm00.stdout: "database": "ok", 2026-03-10T13:50:02.064 INFO:teuthology.orchestra.run.vm00.stdout: "version": "10.4.0" 2026-03-10T13:50:02.065 INFO:teuthology.orchestra.run.vm00.stderr:+ jq -e '.database == "ok"' 2026-03-10T13:50:02.067 INFO:teuthology.orchestra.run.vm00.stderr:+ curl -k -s https://192.168.123.100:3000/api/health 2026-03-10T13:50:02.077 INFO:teuthology.orchestra.run.vm00.stdout:}true 2026-03-10T13:50:02.078 INFO:teuthology.orchestra.run.vm00.stderr:+ ceph orch daemon stop mon.c 2026-03-10T13:50:02.243 INFO:teuthology.orchestra.run.vm00.stdout:Scheduled to stop mon.c on host 'vm08' 2026-03-10T13:50:02.263 INFO:teuthology.orchestra.run.vm00.stderr:+ sleep 120 2026-03-10T13:50:02.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:02 vm08 bash[23387]: cluster 2026-03-10T13:50:00.671625+0000 mgr.a (mgr.14388) 133 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:02.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:02 vm08 bash[23387]: cluster 2026-03-10T13:50:00.671625+0000 mgr.a (mgr.14388) 133 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:02.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:02 vm08 bash[23387]: audit 2026-03-10T13:50:00.779129+0000 mgr.a (mgr.14388) 134 : audit [DBG] from='client.24313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 
2026-03-10T13:50:02.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:02 vm08 bash[23387]: audit 2026-03-10T13:50:00.779129+0000 mgr.a (mgr.14388) 134 : audit [DBG] from='client.24313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:50:02.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:02 vm08 bash[23387]: audit 2026-03-10T13:50:00.939046+0000 mgr.a (mgr.14388) 135 : audit [DBG] from='client.24316 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:50:02.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:02 vm08 bash[23387]: audit 2026-03-10T13:50:00.939046+0000 mgr.a (mgr.14388) 135 : audit [DBG] from='client.24316 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:50:02.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:02 vm08 bash[23387]: audit 2026-03-10T13:50:01.098152+0000 mgr.a (mgr.14388) 136 : audit [DBG] from='client.24319 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "grafana", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:50:02.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:02 vm08 bash[23387]: audit 2026-03-10T13:50:01.098152+0000 mgr.a (mgr.14388) 136 : audit [DBG] from='client.24319 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "grafana", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:50:02.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:02 vm08 bash[23387]: audit 2026-03-10T13:50:01.260532+0000 mgr.a (mgr.14388) 137 : audit [DBG] from='client.14454 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "prometheus", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:50:02.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:02 vm08 bash[23387]: 
audit 2026-03-10T13:50:01.260532+0000 mgr.a (mgr.14388) 137 : audit [DBG] from='client.14454 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "prometheus", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:50:02.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:02 vm08 bash[23387]: audit 2026-03-10T13:50:01.417479+0000 mgr.a (mgr.14388) 138 : audit [DBG] from='client.14460 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "alertmanager", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:50:02.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:02 vm08 bash[23387]: audit 2026-03-10T13:50:01.417479+0000 mgr.a (mgr.14388) 138 : audit [DBG] from='client.14460 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "alertmanager", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:50:02.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:02 vm08 bash[23387]: audit 2026-03-10T13:50:01.574060+0000 mgr.a (mgr.14388) 139 : audit [DBG] from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:50:02.339 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:02 vm08 bash[23387]: audit 2026-03-10T13:50:01.574060+0000 mgr.a (mgr.14388) 139 : audit [DBG] from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:50:02.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:02 vm00 bash[20748]: cluster 2026-03-10T13:50:00.671625+0000 mgr.a (mgr.14388) 133 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:02.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:02 vm00 bash[20748]: cluster 2026-03-10T13:50:00.671625+0000 mgr.a (mgr.14388) 133 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 
60 GiB avail 2026-03-10T13:50:02.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:02 vm00 bash[20748]: audit 2026-03-10T13:50:00.779129+0000 mgr.a (mgr.14388) 134 : audit [DBG] from='client.24313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:50:02.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:02 vm00 bash[20748]: audit 2026-03-10T13:50:00.779129+0000 mgr.a (mgr.14388) 134 : audit [DBG] from='client.24313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T13:50:02.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:02 vm00 bash[20748]: audit 2026-03-10T13:50:00.939046+0000 mgr.a (mgr.14388) 135 : audit [DBG] from='client.24316 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:50:02.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:02 vm00 bash[20748]: audit 2026-03-10T13:50:00.939046+0000 mgr.a (mgr.14388) 135 : audit [DBG] from='client.24316 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:50:02.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:02 vm00 bash[20748]: audit 2026-03-10T13:50:01.098152+0000 mgr.a (mgr.14388) 136 : audit [DBG] from='client.24319 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "grafana", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:50:02.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:02 vm00 bash[20748]: audit 2026-03-10T13:50:01.098152+0000 mgr.a (mgr.14388) 136 : audit [DBG] from='client.24319 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "grafana", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T13:50:02.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:02 vm00 bash[20748]: audit 
2026-03-10T13:50:01.260532+0000 mgr.a (mgr.14388) 137 : audit [DBG] from='client.14454 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "prometheus", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:50:02.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:02 vm00 bash[20748]: audit 2026-03-10T13:50:01.417479+0000 mgr.a (mgr.14388) 138 : audit [DBG] from='client.14460 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "alertmanager", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:50:02.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:02 vm00 bash[20748]: audit 2026-03-10T13:50:01.574060+0000 mgr.a (mgr.14388) 139 : audit [DBG] from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:50:02.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:02 vm07 bash[23044]: cluster 2026-03-10T13:50:00.671625+0000 mgr.a (mgr.14388) 133 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:02.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:02 vm07 bash[23044]: audit 2026-03-10T13:50:00.779129+0000 mgr.a (mgr.14388) 134 : audit [DBG] from='client.24313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:50:02.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:02 vm07 bash[23044]: audit 2026-03-10T13:50:00.939046+0000 mgr.a (mgr.14388) 135 : audit [DBG] from='client.24316 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:50:02.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:02 vm07 bash[23044]: audit 2026-03-10T13:50:01.098152+0000 mgr.a (mgr.14388) 136 : audit [DBG] from='client.24319 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "grafana", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:50:02.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:02 vm07 bash[23044]: audit 2026-03-10T13:50:01.260532+0000 mgr.a (mgr.14388) 137 : audit [DBG] from='client.14454 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "prometheus", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:50:02.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:02 vm07 bash[23044]: audit 2026-03-10T13:50:01.417479+0000 mgr.a (mgr.14388) 138 : audit [DBG] from='client.14460 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "alertmanager", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:50:02.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:02 vm07 bash[23044]: audit 2026-03-10T13:50:01.574060+0000 mgr.a (mgr.14388) 139 : audit [DBG] from='client.14466 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:50:03.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:03 vm07 bash[23044]: audit 2026-03-10T13:50:01.726758+0000 mgr.a (mgr.14388) 140 : audit [DBG] from='client.14472 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:50:03.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:03 vm07 bash[23044]: audit 2026-03-10T13:50:01.881009+0000 mgr.a (mgr.14388) 141 : audit [DBG] from='client.14478 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:50:03.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:03 vm07 bash[23044]: audit 2026-03-10T13:50:02.034968+0000 mgr.a (mgr.14388) 142 : audit [DBG] from='client.14484 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:50:03.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:03 vm07 bash[23044]: audit 2026-03-10T13:50:02.229135+0000 mgr.a (mgr.14388) 143 : audit [DBG] from='client.24326 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "stop", "name": "mon.c", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:50:03.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:03 vm07 bash[23044]: cephadm 2026-03-10T13:50:02.229577+0000 mgr.a (mgr.14388) 144 : cephadm [INF] Schedule stop daemon mon.c
2026-03-10T13:50:03.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:03 vm07 bash[23044]: audit 2026-03-10T13:50:02.235296+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:50:03.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:03 vm07 bash[23044]: audit 2026-03-10T13:50:02.241240+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:50:03.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:03 vm07 bash[23044]: audit 2026-03-10T13:50:02.242542+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:50:03.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:03 vm07 bash[23044]: audit 2026-03-10T13:50:02.244238+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:50:03.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:03 vm07 bash[23044]: audit 2026-03-10T13:50:02.246546+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:50:03.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:03 vm07 bash[23044]: audit 2026-03-10T13:50:02.251907+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:50:03.498 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:03 vm07 bash[23044]: audit 2026-03-10T13:50:02.695729+0000 mon.a (mon.0) 587 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:50:03.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:03 vm08 bash[23387]: audit 2026-03-10T13:50:01.726758+0000 mgr.a (mgr.14388) 140 : audit [DBG] from='client.14472 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:50:03.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:03 vm08 bash[23387]: audit 2026-03-10T13:50:01.881009+0000 mgr.a (mgr.14388) 141 : audit [DBG] from='client.14478 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:50:03.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:03 vm08 bash[23387]: audit 2026-03-10T13:50:02.034968+0000 mgr.a (mgr.14388) 142 : audit [DBG] from='client.14484 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:50:03.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:03 vm08 bash[23387]: audit 2026-03-10T13:50:02.229135+0000 mgr.a (mgr.14388) 143 : audit [DBG] from='client.24326 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "stop", "name": "mon.c", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:50:03.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:03 vm08 bash[23387]: cephadm 2026-03-10T13:50:02.229577+0000 mgr.a (mgr.14388) 144 : cephadm [INF] Schedule stop daemon mon.c
2026-03-10T13:50:03.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:03 vm08 bash[23387]: audit 2026-03-10T13:50:02.235296+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:50:03.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:03 vm08 bash[23387]: audit 2026-03-10T13:50:02.241240+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:50:03.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:03 vm08 bash[23387]: audit 2026-03-10T13:50:02.242542+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:50:03.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:03 vm08 bash[23387]: audit 2026-03-10T13:50:02.244238+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:50:03.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:03 vm08 bash[23387]: audit 2026-03-10T13:50:02.246546+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:50:03.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:03 vm08 bash[23387]: audit 2026-03-10T13:50:02.251907+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:50:03.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:03 vm08 bash[23387]: audit 2026-03-10T13:50:02.695729+0000 mon.a (mon.0) 587 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:50:03.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:03 vm00 bash[20748]: audit 2026-03-10T13:50:01.726758+0000 mgr.a (mgr.14388) 140 : audit [DBG] from='client.14472 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:50:03.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:03 vm00 bash[20748]: audit 2026-03-10T13:50:01.881009+0000 mgr.a (mgr.14388) 141 : audit [DBG] from='client.14478 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:50:03.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:03 vm00 bash[20748]: audit 2026-03-10T13:50:02.034968+0000 mgr.a (mgr.14388) 142 : audit [DBG] from='client.14484 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
2026-03-10T13:50:03.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:03 vm00 bash[20748]: audit 2026-03-10T13:50:02.229135+0000 mgr.a (mgr.14388) 143 : audit [DBG] from='client.24326 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "stop", "name": "mon.c", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T13:50:03.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:03 vm00 bash[20748]: cephadm 2026-03-10T13:50:02.229577+0000 mgr.a (mgr.14388) 144 : cephadm [INF] Schedule stop daemon mon.c
2026-03-10T13:50:03.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:03 vm00 bash[20748]: audit 2026-03-10T13:50:02.235296+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:50:03.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:03 vm00 bash[20748]: audit 2026-03-10T13:50:02.241240+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:50:03.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:03 vm00 bash[20748]: audit 2026-03-10T13:50:02.242542+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:50:03.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:03 vm00 bash[20748]: audit 2026-03-10T13:50:02.244238+0000 mon.a (mon.0) 584 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:50:03.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:03 vm00 bash[20748]: audit 2026-03-10T13:50:02.246546+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:50:03.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:03 vm00 bash[20748]: audit 2026-03-10T13:50:02.251907+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:50:03.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:03 vm00 bash[20748]: audit 2026-03-10T13:50:02.695729+0000 mon.a (mon.0) 587 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:50:04.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:04 vm08 bash[23387]: cluster 2026-03-10T13:50:02.671984+0000 mgr.a (mgr.14388) 145 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:04.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:04 vm00 bash[20748]: cluster 2026-03-10T13:50:02.671984+0000 mgr.a (mgr.14388) 145 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:04.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:04 vm07 bash[23044]: cluster 2026-03-10T13:50:02.671984+0000 mgr.a (mgr.14388) 145 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:06.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:06 vm08 bash[23387]: cluster 2026-03-10T13:50:04.672366+0000 mgr.a (mgr.14388) 146 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:06.624 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:06 vm00 bash[20748]: cluster 2026-03-10T13:50:04.672366+0000 mgr.a (mgr.14388) 146 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:06.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:06 vm07 bash[23044]: cluster 2026-03-10T13:50:04.672366+0000 mgr.a (mgr.14388) 146 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:06.967 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:50:06 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:50:06] "GET /metrics HTTP/1.1" 200 21336 "" "Prometheus/2.51.0"
2026-03-10T13:50:07.299 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:07 vm08 systemd[1]: Stopping Ceph mon.c for c9620084-1c86-11f1-bcc5-e3fb709eab0a...
2026-03-10T13:50:07.299 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:07 vm08 bash[23387]: debug 2026-03-10T13:50:07.061+0000 7fe3600a9640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-10T13:50:07.299 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:07 vm08 bash[23387]: debug 2026-03-10T13:50:07.061+0000 7fe3600a9640 -1 mon.c@1(peon) e3 *** Got Signal Terminated ***
2026-03-10T13:50:07.299 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:07 vm08 bash[31221]: ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a-mon-c
2026-03-10T13:50:07.588 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:07 vm08 systemd[1]: ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mon.c.service: Deactivated successfully.
2026-03-10T13:50:07.589 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 10 13:50:07 vm08 systemd[1]: Stopped Ceph mon.c for c9620084-1c86-11f1-bcc5-e3fb709eab0a.
2026-03-10T13:50:16.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:50:16 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:50:16] "GET /metrics HTTP/1.1" 200 21336 "" "Prometheus/2.51.0"
2026-03-10T13:50:22.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:06.672625+0000 mgr.a (mgr.14388) 147 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:22.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:08.672819+0000 mgr.a (mgr.14388) 148 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:22.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:10.673063+0000 mgr.a (mgr.14388) 149 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:22.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:12.673299+0000 mgr.a (mgr.14388) 150 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:22.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:14.673539+0000 mgr.a (mgr.14388) 151 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:22.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:16.271292+0000 mon.b (mon.2) 16 : cluster [INF] mon.b calling monitor election
2026-03-10T13:50:22.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:16.273595+0000 mon.a (mon.0) 588 : cluster [INF] mon.a calling monitor election
2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:16.673737+0000 mgr.a (mgr.14388) 152 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: audit 2026-03-10T13:50:17.695970+0000 mon.a (mon.0) 589 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:18.673994+0000 mgr.a (mgr.14388) 153 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:20.674211+0000 mgr.a (mgr.14388) 154 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.277268+0000 mon.a (mon.0) 590 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,2)
2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.282621+0000 mon.a (mon.0) 591 : cluster [DBG] monmap epoch 3
2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.282647+0000 mon.a (mon.0) 592 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a
2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.282656+0000 mon.a (mon.0) 593 : cluster [DBG] last_changed 2026-03-10T13:43:17.480839+0000
2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.282760+0000 mon.a (mon.0) 594 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000
2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.282777+0000 mon.a (mon.0) 595 : cluster [DBG] min_mon_release 19 (squid)
2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.282790+0000 mon.a (mon.0) 596 : cluster [DBG] election_strategy: 1
2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.282803+0000 mon.a (mon.0) 597 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a
2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.282816+0000 mon.a (mon.0) 598 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c
2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.282830+0000 mon.a (mon.0) 599 : cluster [DBG] 2: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b
2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.282830+0000 mon.a (mon.0) 599 : cluster [DBG] 2:
[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.283317+0000 mon.a (mon.0) 600 : cluster [DBG] fsmap 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.283317+0000 mon.a (mon.0) 600 : cluster [DBG] fsmap 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.283341+0000 mon.a (mon.0) 601 : cluster [DBG] osdmap e23: 3 total, 3 up, 3 in 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.283341+0000 mon.a (mon.0) 601 : cluster [DBG] osdmap e23: 3 total, 3 up, 3 in 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.284358+0000 mon.a (mon.0) 602 : cluster [DBG] mgrmap e19: a(active, since 3m), standbys: b 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.284358+0000 mon.a (mon.0) 602 : cluster [DBG] mgrmap e19: a(active, since 3m), standbys: b 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.284480+0000 mon.a (mon.0) 603 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.284480+0000 mon.a (mon.0) 603 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: audit 2026-03-10T13:50:21.298144+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:50:22.719 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: audit 2026-03-10T13:50:21.298144+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.298365+0000 mon.a (mon.0) 605 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.298365+0000 mon.a (mon.0) 605 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.298378+0000 mon.a (mon.0) 606 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.298378+0000 mon.a (mon.0) 606 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.298391+0000 mon.a (mon.0) 607 : cluster [WRN] mon.c (rank 1) addr [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] is down (out of quorum) 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: cluster 2026-03-10T13:50:21.298391+0000 mon.a (mon.0) 607 : cluster [WRN] mon.c (rank 1) addr [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] is down (out of quorum) 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: audit 2026-03-10T13:50:21.302296+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: audit 2026-03-10T13:50:21.302296+0000 mon.a (mon.0) 608 : 
audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: audit 2026-03-10T13:50:21.329604+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:50:22.719 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:22 vm00 bash[20748]: audit 2026-03-10T13:50:21.329604+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:06.672625+0000 mgr.a (mgr.14388) 147 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:06.672625+0000 mgr.a (mgr.14388) 147 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:08.672819+0000 mgr.a (mgr.14388) 148 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:08.672819+0000 mgr.a (mgr.14388) 148 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:10.673063+0000 mgr.a (mgr.14388) 149 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:22.748 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:10.673063+0000 mgr.a (mgr.14388) 149 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:12.673299+0000 mgr.a (mgr.14388) 150 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:12.673299+0000 mgr.a (mgr.14388) 150 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:14.673539+0000 mgr.a (mgr.14388) 151 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:14.673539+0000 mgr.a (mgr.14388) 151 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:16.271292+0000 mon.b (mon.2) 16 : cluster [INF] mon.b calling monitor election 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:16.271292+0000 mon.b (mon.2) 16 : cluster [INF] mon.b calling monitor election 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:16.273595+0000 mon.a (mon.0) 588 : cluster [INF] mon.a calling monitor election 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 
2026-03-10T13:50:16.273595+0000 mon.a (mon.0) 588 : cluster [INF] mon.a calling monitor election 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:16.673737+0000 mgr.a (mgr.14388) 152 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:16.673737+0000 mgr.a (mgr.14388) 152 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: audit 2026-03-10T13:50:17.695970+0000 mon.a (mon.0) 589 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: audit 2026-03-10T13:50:17.695970+0000 mon.a (mon.0) 589 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:18.673994+0000 mgr.a (mgr.14388) 153 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:18.673994+0000 mgr.a (mgr.14388) 153 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:20.674211+0000 mgr.a (mgr.14388) 154 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:22.748 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:20.674211+0000 mgr.a (mgr.14388) 154 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.277268+0000 mon.a (mon.0) 590 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,2) 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.277268+0000 mon.a (mon.0) 590 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,2) 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.282621+0000 mon.a (mon.0) 591 : cluster [DBG] monmap epoch 3 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.282621+0000 mon.a (mon.0) 591 : cluster [DBG] monmap epoch 3 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.282647+0000 mon.a (mon.0) 592 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.282647+0000 mon.a (mon.0) 592 : cluster [DBG] fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.282656+0000 mon.a (mon.0) 593 : cluster [DBG] last_changed 2026-03-10T13:43:17.480839+0000 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.282656+0000 mon.a (mon.0) 593 : cluster [DBG] last_changed 2026-03-10T13:43:17.480839+0000 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 
bash[23044]: cluster 2026-03-10T13:50:21.282760+0000 mon.a (mon.0) 594 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.282760+0000 mon.a (mon.0) 594 : cluster [DBG] created 2026-03-10T13:42:07.014183+0000 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.282777+0000 mon.a (mon.0) 595 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.282777+0000 mon.a (mon.0) 595 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.282790+0000 mon.a (mon.0) 596 : cluster [DBG] election_strategy: 1 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.282790+0000 mon.a (mon.0) 596 : cluster [DBG] election_strategy: 1 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.282803+0000 mon.a (mon.0) 597 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.282803+0000 mon.a (mon.0) 597 : cluster [DBG] 0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.282816+0000 mon.a (mon.0) 598 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.282816+0000 mon.a (mon.0) 598 : cluster [DBG] 1: 
[v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.282830+0000 mon.a (mon.0) 599 : cluster [DBG] 2: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.282830+0000 mon.a (mon.0) 599 : cluster [DBG] 2: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.283317+0000 mon.a (mon.0) 600 : cluster [DBG] fsmap 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.283317+0000 mon.a (mon.0) 600 : cluster [DBG] fsmap 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.283341+0000 mon.a (mon.0) 601 : cluster [DBG] osdmap e23: 3 total, 3 up, 3 in 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.283341+0000 mon.a (mon.0) 601 : cluster [DBG] osdmap e23: 3 total, 3 up, 3 in 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.284358+0000 mon.a (mon.0) 602 : cluster [DBG] mgrmap e19: a(active, since 3m), standbys: b 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.284358+0000 mon.a (mon.0) 602 : cluster [DBG] mgrmap e19: a(active, since 3m), standbys: b 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.284480+0000 mon.a (mon.0) 603 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-10T13:50:22.748 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.284480+0000 mon.a (mon.0) 603 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-10T13:50:22.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: audit 2026-03-10T13:50:21.298144+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:50:22.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: audit 2026-03-10T13:50:21.298144+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:50:22.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.298365+0000 mon.a (mon.0) 605 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-10T13:50:22.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.298365+0000 mon.a (mon.0) 605 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-10T13:50:22.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.298378+0000 mon.a (mon.0) 606 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-10T13:50:22.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.298378+0000 mon.a (mon.0) 606 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-10T13:50:22.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.298391+0000 mon.a (mon.0) 607 : cluster [WRN] mon.c (rank 1) addr [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] is down (out of quorum) 2026-03-10T13:50:22.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: cluster 2026-03-10T13:50:21.298391+0000 mon.a (mon.0) 607 : cluster [WRN] mon.c (rank 1) addr 
[v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] is down (out of quorum) 2026-03-10T13:50:22.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: audit 2026-03-10T13:50:21.302296+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:50:22.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: audit 2026-03-10T13:50:21.302296+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:50:22.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: audit 2026-03-10T13:50:21.329604+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:50:22.749 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:22 vm07 bash[23044]: audit 2026-03-10T13:50:21.329604+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T13:50:24.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:24 vm00 bash[20748]: cluster 2026-03-10T13:50:22.674430+0000 mgr.a (mgr.14388) 155 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:24.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:24 vm00 bash[20748]: cluster 2026-03-10T13:50:22.674430+0000 mgr.a (mgr.14388) 155 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:24.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:24 vm07 bash[23044]: cluster 2026-03-10T13:50:22.674430+0000 mgr.a (mgr.14388) 155 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:24.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:24 vm07 bash[23044]: cluster 
2026-03-10T13:50:22.674430+0000 mgr.a (mgr.14388) 155 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:26.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:26 vm00 bash[20748]: cluster 2026-03-10T13:50:24.674685+0000 mgr.a (mgr.14388) 156 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:26.623 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:26 vm00 bash[20748]: cluster 2026-03-10T13:50:24.674685+0000 mgr.a (mgr.14388) 156 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:26.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:26 vm07 bash[23044]: cluster 2026-03-10T13:50:24.674685+0000 mgr.a (mgr.14388) 156 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:26.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:26 vm07 bash[23044]: cluster 2026-03-10T13:50:24.674685+0000 mgr.a (mgr.14388) 156 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:50:26.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:50:26 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:50:26] "GET /metrics HTTP/1.1" 200 21333 "" "Prometheus/2.51.0" 2026-03-10T13:50:27.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:27 vm00 bash[20748]: audit 2026-03-10T13:50:26.369903+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:50:27.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:27 vm00 bash[20748]: audit 2026-03-10T13:50:26.369903+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:50:27.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:27 vm00 bash[20748]: audit 2026-03-10T13:50:26.373493+0000 mon.a (mon.0) 
611 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:50:27.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:27 vm00 bash[20748]: audit 2026-03-10T13:50:26.373493+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:50:27.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:27 vm00 bash[20748]: audit 2026-03-10T13:50:26.374032+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:50:27.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:27 vm00 bash[20748]: audit 2026-03-10T13:50:26.374032+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:50:27.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:27 vm00 bash[20748]: audit 2026-03-10T13:50:26.374462+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:50:27.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:27 vm00 bash[20748]: audit 2026-03-10T13:50:26.374462+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:50:27.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:27 vm00 bash[20748]: audit 2026-03-10T13:50:26.377361+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:50:27.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:27 vm00 bash[20748]: audit 2026-03-10T13:50:26.377361+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:50:27.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:27 vm07 
bash[23044]: audit 2026-03-10T13:50:26.369903+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:50:27.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:27 vm07 bash[23044]: audit 2026-03-10T13:50:26.369903+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:50:27.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:27 vm07 bash[23044]: audit 2026-03-10T13:50:26.373493+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:50:27.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:27 vm07 bash[23044]: audit 2026-03-10T13:50:26.373493+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:50:27.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:27 vm07 bash[23044]: audit 2026-03-10T13:50:26.374032+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:50:27.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:27 vm07 bash[23044]: audit 2026-03-10T13:50:26.374032+0000 mon.a (mon.0) 612 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T13:50:27.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:27 vm07 bash[23044]: audit 2026-03-10T13:50:26.374462+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:50:27.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:27 vm07 bash[23044]: audit 2026-03-10T13:50:26.374462+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-10T13:50:27.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:27 vm07 bash[23044]: audit 2026-03-10T13:50:26.377361+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:50:27.748 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:27 vm07 bash[23044]: audit 2026-03-10T13:50:26.377361+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:50:28.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:28 vm00 bash[20748]: cluster 2026-03-10T13:50:26.674860+0000 mgr.a (mgr.14388) 157 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:28.717 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:28 vm00 bash[20748]: cluster 2026-03-10T13:50:26.674860+0000 mgr.a (mgr.14388) 157 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:28.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:28 vm07 bash[23044]: cluster 2026-03-10T13:50:26.674860+0000 mgr.a (mgr.14388) 157 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:28.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:28 vm07 bash[23044]: cluster 2026-03-10T13:50:26.674860+0000 mgr.a (mgr.14388) 157 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:30.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:30 vm00 bash[20748]: cluster 2026-03-10T13:50:28.675049+0000 mgr.a (mgr.14388) 158 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:30.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:30 vm00 bash[20748]: cluster 2026-03-10T13:50:28.675049+0000 mgr.a (mgr.14388) 158 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:30.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:30 vm07 bash[23044]: cluster 2026-03-10T13:50:28.675049+0000 mgr.a (mgr.14388) 158 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:30.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:30 vm07 bash[23044]: cluster 2026-03-10T13:50:28.675049+0000 mgr.a (mgr.14388) 158 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:32.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:32 vm00 bash[20748]: cluster 2026-03-10T13:50:30.675249+0000 mgr.a (mgr.14388) 159 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:32.718 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:32 vm00 bash[20748]: cluster 2026-03-10T13:50:30.675249+0000 mgr.a (mgr.14388) 159 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:32.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:32 vm07 bash[23044]: cluster 2026-03-10T13:50:30.675249+0000 mgr.a (mgr.14388) 159 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:32.747 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:32 vm07 bash[23044]: cluster 2026-03-10T13:50:30.675249+0000 mgr.a (mgr.14388) 159 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:33.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:33 vm00 bash[20748]: cluster 2026-03-10T13:50:32.675462+0000 mgr.a (mgr.14388) 160 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:33.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:33 vm00 bash[20748]: cluster 2026-03-10T13:50:32.675462+0000 mgr.a (mgr.14388) 160 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:33.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:33 vm00 bash[20748]: audit 2026-03-10T13:50:32.698433+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:50:33.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:33 vm00 bash[20748]: audit 2026-03-10T13:50:32.698433+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:50:33.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:33 vm00 bash[20748]: audit 2026-03-10T13:50:32.698993+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:50:33.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:33 vm00 bash[20748]: audit 2026-03-10T13:50:32.698993+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:50:33.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:33 vm07 bash[23044]: cluster 2026-03-10T13:50:32.675462+0000 mgr.a (mgr.14388) 160 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:33.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:33 vm07 bash[23044]: cluster 2026-03-10T13:50:32.675462+0000 mgr.a (mgr.14388) 160 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:33.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:33 vm07 bash[23044]: audit 2026-03-10T13:50:32.698433+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:50:33.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:33 vm07 bash[23044]: audit 2026-03-10T13:50:32.698433+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:50:33.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:33 vm07 bash[23044]: audit 2026-03-10T13:50:32.698993+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:50:33.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:33 vm07 bash[23044]: audit 2026-03-10T13:50:32.698993+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:50:35.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:35 vm00 bash[20748]: cluster 2026-03-10T13:50:34.675725+0000 mgr.a (mgr.14388) 161 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:35.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:35 vm00 bash[20748]: cluster 2026-03-10T13:50:34.675725+0000 mgr.a (mgr.14388) 161 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:35.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:35 vm07 bash[23044]: cluster 2026-03-10T13:50:34.675725+0000 mgr.a (mgr.14388) 161 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:35.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:35 vm07 bash[23044]: cluster 2026-03-10T13:50:34.675725+0000 mgr.a (mgr.14388) 161 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:36.967 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:50:36 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:50:36] "GET /metrics HTTP/1.1" 200 21395 "" "Prometheus/2.51.0"
2026-03-10T13:50:37.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:37 vm00 bash[20748]: cluster 2026-03-10T13:50:36.675952+0000 mgr.a (mgr.14388) 162 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:37.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:37 vm00 bash[20748]: cluster 2026-03-10T13:50:36.675952+0000 mgr.a (mgr.14388) 162 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:37.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:37 vm07 bash[23044]: cluster 2026-03-10T13:50:36.675952+0000 mgr.a (mgr.14388) 162 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:37.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:37 vm07 bash[23044]: cluster 2026-03-10T13:50:36.675952+0000 mgr.a (mgr.14388) 162 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:39.967 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:39 vm00 bash[20748]: cluster 2026-03-10T13:50:38.676134+0000 mgr.a (mgr.14388) 163 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:39.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:39 vm00 bash[20748]: cluster 2026-03-10T13:50:38.676134+0000 mgr.a (mgr.14388) 163 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:39.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:39 vm07 bash[23044]: cluster 2026-03-10T13:50:38.676134+0000 mgr.a (mgr.14388) 163 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:39.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:39 vm07 bash[23044]: cluster 2026-03-10T13:50:38.676134+0000 mgr.a (mgr.14388) 163 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:41.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:41 vm07 bash[23044]: cluster 2026-03-10T13:50:40.676369+0000 mgr.a (mgr.14388) 164 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:41.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:41 vm07 bash[23044]: cluster 2026-03-10T13:50:40.676369+0000 mgr.a (mgr.14388) 164 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:42.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:41 vm00 bash[20748]: cluster 2026-03-10T13:50:40.676369+0000 mgr.a (mgr.14388) 164 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:42.217 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:41 vm00 bash[20748]: cluster 2026-03-10T13:50:40.676369+0000 mgr.a (mgr.14388) 164 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:43.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:43 vm07 bash[23044]: cluster 2026-03-10T13:50:42.676592+0000 mgr.a (mgr.14388) 165 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:43.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:43 vm07 bash[23044]: cluster 2026-03-10T13:50:42.676592+0000 mgr.a (mgr.14388) 165 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:44.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:43 vm00 bash[20748]: cluster 2026-03-10T13:50:42.676592+0000 mgr.a (mgr.14388) 165 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:44.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:43 vm00 bash[20748]: cluster 2026-03-10T13:50:42.676592+0000 mgr.a (mgr.14388) 165 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:45.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:45 vm07 bash[23044]: cluster 2026-03-10T13:50:44.676830+0000 mgr.a (mgr.14388) 166 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:45.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:45 vm07 bash[23044]: cluster 2026-03-10T13:50:44.676830+0000 mgr.a (mgr.14388) 166 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:46.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:45 vm00 bash[20748]: cluster 2026-03-10T13:50:44.676830+0000 mgr.a (mgr.14388) 166 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:46.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:45 vm00 bash[20748]: cluster 2026-03-10T13:50:44.676830+0000 mgr.a (mgr.14388) 166 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:46.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:50:46 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:50:46] "GET /metrics HTTP/1.1" 200 21395 "" "Prometheus/2.51.0"
2026-03-10T13:50:47.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:47 vm07 bash[23044]: cluster 2026-03-10T13:50:46.677037+0000 mgr.a (mgr.14388) 167 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:47.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:47 vm07 bash[23044]: cluster 2026-03-10T13:50:46.677037+0000 mgr.a (mgr.14388) 167 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:47.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:47 vm07 bash[23044]: audit 2026-03-10T13:50:47.696287+0000 mon.a (mon.0) 617 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:50:47.998 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:47 vm07 bash[23044]: audit 2026-03-10T13:50:47.696287+0000 mon.a (mon.0) 617 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:50:48.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:47 vm00 bash[20748]: cluster 2026-03-10T13:50:46.677037+0000 mgr.a (mgr.14388) 167 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:48.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:47 vm00 bash[20748]: cluster 2026-03-10T13:50:46.677037+0000 mgr.a (mgr.14388) 167 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:48.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:47 vm00 bash[20748]: audit 2026-03-10T13:50:47.696287+0000 mon.a (mon.0) 617 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:50:48.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:47 vm00 bash[20748]: audit 2026-03-10T13:50:47.696287+0000 mon.a (mon.0) 617 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:50:49.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:49 vm07 bash[23044]: cluster 2026-03-10T13:50:48.677245+0000 mgr.a (mgr.14388) 168 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:49.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:49 vm07 bash[23044]: cluster 2026-03-10T13:50:48.677245+0000 mgr.a (mgr.14388) 168 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:50.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:49 vm00 bash[20748]: cluster 2026-03-10T13:50:48.677245+0000 mgr.a (mgr.14388) 168 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:50.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:49 vm00 bash[20748]: cluster 2026-03-10T13:50:48.677245+0000 mgr.a (mgr.14388) 168 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:52.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:51 vm00 bash[20748]: cluster 2026-03-10T13:50:50.677466+0000 mgr.a (mgr.14388) 169 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:52.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:51 vm00 bash[20748]: cluster 2026-03-10T13:50:50.677466+0000 mgr.a (mgr.14388) 169 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:52.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:51 vm07 bash[23044]: cluster 2026-03-10T13:50:50.677466+0000 mgr.a (mgr.14388) 169 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:52.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:51 vm07 bash[23044]: cluster 2026-03-10T13:50:50.677466+0000 mgr.a (mgr.14388) 169 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:54.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:53 vm00 bash[20748]: cluster 2026-03-10T13:50:52.677727+0000 mgr.a (mgr.14388) 170 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:54.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:53 vm00 bash[20748]: cluster 2026-03-10T13:50:52.677727+0000 mgr.a (mgr.14388) 170 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:54.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:53 vm07 bash[23044]: cluster 2026-03-10T13:50:52.677727+0000 mgr.a (mgr.14388) 170 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:54.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:53 vm07 bash[23044]: cluster 2026-03-10T13:50:52.677727+0000 mgr.a (mgr.14388) 170 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:56.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:55 vm00 bash[20748]: cluster 2026-03-10T13:50:54.677938+0000 mgr.a (mgr.14388) 171 : cluster [DBG] pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:56.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:55 vm00 bash[20748]: cluster 2026-03-10T13:50:54.677938+0000 mgr.a (mgr.14388) 171 : cluster [DBG] pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:56.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:55 vm07 bash[23044]: cluster 2026-03-10T13:50:54.677938+0000 mgr.a (mgr.14388) 171 : cluster [DBG] pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:56.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:55 vm07 bash[23044]: cluster 2026-03-10T13:50:54.677938+0000 mgr.a (mgr.14388) 171 : cluster [DBG] pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:56.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:50:56 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:50:56] "GET /metrics HTTP/1.1" 200 21394 "" "Prometheus/2.51.0"
2026-03-10T13:50:58.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:57 vm00 bash[20748]: cluster 2026-03-10T13:50:56.678188+0000 mgr.a (mgr.14388) 172 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:58.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:57 vm00 bash[20748]: cluster 2026-03-10T13:50:56.678188+0000 mgr.a (mgr.14388) 172 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:58.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:57 vm07 bash[23044]: cluster 2026-03-10T13:50:56.678188+0000 mgr.a (mgr.14388) 172 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:50:58.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:57 vm07 bash[23044]: cluster 2026-03-10T13:50:56.678188+0000 mgr.a (mgr.14388) 172 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:00.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:59 vm00 bash[20748]: cluster 2026-03-10T13:50:58.678401+0000 mgr.a (mgr.14388) 173 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:00.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:50:59 vm00 bash[20748]: cluster 2026-03-10T13:50:58.678401+0000 mgr.a (mgr.14388) 173 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:00.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:59 vm07 bash[23044]: cluster 2026-03-10T13:50:58.678401+0000 mgr.a (mgr.14388) 173 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:00.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:50:59 vm07 bash[23044]: cluster 2026-03-10T13:50:58.678401+0000 mgr.a (mgr.14388) 173 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:02.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:01 vm00 bash[20748]: cluster 2026-03-10T13:51:00.678626+0000 mgr.a (mgr.14388) 174 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:02.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:01 vm00 bash[20748]: cluster 2026-03-10T13:51:00.678626+0000 mgr.a (mgr.14388) 174 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:02.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:01 vm07 bash[23044]: cluster 2026-03-10T13:51:00.678626+0000 mgr.a (mgr.14388) 174 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:02.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:01 vm07 bash[23044]: cluster 2026-03-10T13:51:00.678626+0000 mgr.a (mgr.14388) 174 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:03.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:02 vm00 bash[20748]: audit 2026-03-10T13:51:02.696496+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:51:03.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:02 vm00 bash[20748]: audit 2026-03-10T13:51:02.696496+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:51:03.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:02 vm07 bash[23044]: audit 2026-03-10T13:51:02.696496+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:51:03.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:02 vm07 bash[23044]: audit 2026-03-10T13:51:02.696496+0000 mon.a (mon.0) 618 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:51:04.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:03 vm00 bash[20748]: cluster 2026-03-10T13:51:02.678888+0000 mgr.a (mgr.14388) 175 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:04.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:03 vm00 bash[20748]: cluster 2026-03-10T13:51:02.678888+0000 mgr.a (mgr.14388) 175 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:04.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:03 vm07 bash[23044]: cluster 2026-03-10T13:51:02.678888+0000 mgr.a (mgr.14388) 175 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:04.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:03 vm07 bash[23044]: cluster 2026-03-10T13:51:02.678888+0000 mgr.a (mgr.14388) 175 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:06.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:05 vm00 bash[20748]: cluster 2026-03-10T13:51:04.679166+0000 mgr.a (mgr.14388) 176 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:06.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:05 vm00 bash[20748]: cluster 2026-03-10T13:51:04.679166+0000 mgr.a (mgr.14388) 176 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:06.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:05 vm07 bash[23044]: cluster 2026-03-10T13:51:04.679166+0000 mgr.a (mgr.14388) 176 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:06.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:05 vm07 bash[23044]: cluster 2026-03-10T13:51:04.679166+0000 mgr.a (mgr.14388) 176 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:06.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:51:06 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:51:06] "GET /metrics HTTP/1.1" 200 21395 "" "Prometheus/2.51.0"
2026-03-10T13:51:08.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:07 vm00 bash[20748]: cluster 2026-03-10T13:51:06.679407+0000 mgr.a (mgr.14388) 177 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:08.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:07 vm00 bash[20748]: cluster 2026-03-10T13:51:06.679407+0000 mgr.a (mgr.14388) 177 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:08.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:07 vm07 bash[23044]: cluster 2026-03-10T13:51:06.679407+0000 mgr.a (mgr.14388) 177 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:08.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:07 vm07 bash[23044]: cluster 2026-03-10T13:51:06.679407+0000 mgr.a (mgr.14388) 177 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:10.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:09 vm00 bash[20748]: cluster 2026-03-10T13:51:08.679658+0000 mgr.a (mgr.14388) 178 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:10.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:09 vm00 bash[20748]: cluster 2026-03-10T13:51:08.679658+0000 mgr.a (mgr.14388) 178 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:10.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:09 vm07 bash[23044]: cluster 2026-03-10T13:51:08.679658+0000 mgr.a (mgr.14388) 178 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:10.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:09 vm07 bash[23044]: cluster 2026-03-10T13:51:08.679658+0000 mgr.a (mgr.14388) 178 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:12.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:11 vm00 bash[20748]: cluster 2026-03-10T13:51:10.679892+0000 mgr.a (mgr.14388) 179 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:12.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:11 vm00 bash[20748]: cluster 2026-03-10T13:51:10.679892+0000 mgr.a (mgr.14388) 179 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:12.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:11 vm07 bash[23044]: cluster 2026-03-10T13:51:10.679892+0000 mgr.a (mgr.14388) 179 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:12.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:11 vm07 bash[23044]: cluster 2026-03-10T13:51:10.679892+0000 mgr.a (mgr.14388) 179 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:14.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:13 vm00 bash[20748]: cluster 2026-03-10T13:51:12.680116+0000 mgr.a (mgr.14388) 180 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:14.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:13 vm00 bash[20748]: cluster 2026-03-10T13:51:12.680116+0000 mgr.a (mgr.14388) 180 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:14.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:13 vm07 bash[23044]: cluster 2026-03-10T13:51:12.680116+0000 mgr.a (mgr.14388) 180 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:14.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:13 vm07 bash[23044]: cluster 2026-03-10T13:51:12.680116+0000 mgr.a (mgr.14388) 180 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:16.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:15 vm00 bash[20748]: cluster 2026-03-10T13:51:14.680392+0000 mgr.a (mgr.14388) 181 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:16.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:15 vm00 bash[20748]: cluster 2026-03-10T13:51:14.680392+0000 mgr.a (mgr.14388) 181 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:16.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:15 vm07 bash[23044]: cluster 2026-03-10T13:51:14.680392+0000 mgr.a (mgr.14388) 181 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:16.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:15 vm07 bash[23044]: cluster 2026-03-10T13:51:14.680392+0000 mgr.a (mgr.14388) 181 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:16.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:51:16 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:51:16] "GET /metrics HTTP/1.1" 200 21395 "" "Prometheus/2.51.0"
2026-03-10T13:51:18.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:17 vm00 bash[20748]: cluster 2026-03-10T13:51:16.680625+0000 mgr.a (mgr.14388) 182 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:18.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:17 vm00 bash[20748]: cluster 2026-03-10T13:51:16.680625+0000 mgr.a (mgr.14388) 182 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:18.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:17 vm00 bash[20748]: audit 2026-03-10T13:51:17.696793+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:51:18.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:17 vm00 bash[20748]: audit 2026-03-10T13:51:17.696793+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:51:18.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:17 vm07 bash[23044]: cluster 2026-03-10T13:51:16.680625+0000 mgr.a (mgr.14388) 182 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:18.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:17 vm07 bash[23044]: cluster 2026-03-10T13:51:16.680625+0000 mgr.a (mgr.14388) 182 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:18.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:17 vm07 bash[23044]: audit 2026-03-10T13:51:17.696793+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:51:18.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:17 vm07 bash[23044]: audit 2026-03-10T13:51:17.696793+0000 mon.a (mon.0) 619 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T13:51:20.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:19 vm00 bash[20748]: cluster 2026-03-10T13:51:18.680825+0000 mgr.a (mgr.14388) 183 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:20.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:19 vm00 bash[20748]: cluster 2026-03-10T13:51:18.680825+0000 mgr.a (mgr.14388) 183 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:20.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:19 vm07 bash[23044]: cluster 2026-03-10T13:51:18.680825+0000 mgr.a (mgr.14388) 183 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:20.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:19 vm07 bash[23044]: cluster 2026-03-10T13:51:18.680825+0000 mgr.a (mgr.14388) 183 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:22.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:21 vm00 bash[20748]: cluster 2026-03-10T13:51:20.681073+0000 mgr.a (mgr.14388) 184 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:22.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:21 vm00 bash[20748]: cluster 2026-03-10T13:51:20.681073+0000 mgr.a (mgr.14388) 184 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:22.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:21 vm07 bash[23044]: cluster 2026-03-10T13:51:20.681073+0000 mgr.a (mgr.14388) 184 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:22.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:21 vm07 bash[23044]: cluster 2026-03-10T13:51:20.681073+0000 mgr.a (mgr.14388) 184 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:24.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:23 vm00 bash[20748]: cluster 2026-03-10T13:51:22.681340+0000 mgr.a (mgr.14388) 185 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:24.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:23 vm00 bash[20748]: cluster 2026-03-10T13:51:22.681340+0000 mgr.a (mgr.14388) 185 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:24.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:23 vm07 bash[23044]: cluster 2026-03-10T13:51:22.681340+0000 mgr.a (mgr.14388) 185 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:24.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:23 vm07 bash[23044]: cluster 2026-03-10T13:51:22.681340+0000 mgr.a (mgr.14388) 185 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:26.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:25 vm00 bash[20748]: cluster 2026-03-10T13:51:24.681576+0000 mgr.a (mgr.14388) 186 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:26.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:25 vm00 bash[20748]: cluster 2026-03-10T13:51:24.681576+0000 mgr.a (mgr.14388) 186 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:26.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:25 vm07 bash[23044]: cluster 2026-03-10T13:51:24.681576+0000 mgr.a (mgr.14388) 186 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:26.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:25 vm07 bash[23044]: cluster 2026-03-10T13:51:24.681576+0000 mgr.a (mgr.14388) 186 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-10T13:51:26.952 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:51:26 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:51:26] "GET /metrics HTTP/1.1" 200 21394 "" "Prometheus/2.51.0"
2026-03-10T13:51:27.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:26 vm00 bash[20748]: audit 2026-03-10T13:51:26.422074+0000 mon.a (mon.0) 620 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:51:27.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:26 vm00 bash[20748]: audit 2026-03-10T13:51:26.422074+0000 mon.a (mon.0) 620 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:51:27.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:26 vm00 bash[20748]: audit 2026-03-10T13:51:26.761427+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:51:27.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:26 vm00 bash[20748]: audit 2026-03-10T13:51:26.761427+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:51:27.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:26 vm00 bash[20748]: audit 2026-03-10T13:51:26.761950+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:51:27.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:26 vm00 bash[20748]: audit 2026-03-10T13:51:26.761950+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T13:51:27.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:26 vm00 bash[20748]: audit 2026-03-10T13:51:26.766862+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:51:27.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:26 vm00 bash[20748]: audit 2026-03-10T13:51:26.766862+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a'
2026-03-10T13:51:27.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:26 vm07 bash[23044]: audit 2026-03-10T13:51:26.422074+0000 mon.a (mon.0) 620 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:51:27.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:26 vm07 bash[23044]: audit 2026-03-10T13:51:26.422074+0000 mon.a (mon.0) 620 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T13:51:27.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:26 vm07 bash[23044]: audit 2026-03-10T13:51:26.761427+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:51:27.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:26 vm07 bash[23044]: audit 2026-03-10T13:51:26.761427+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T13:51:27.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:26 vm07 bash[23044]: audit
2026-03-10T13:51:26.761950+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:51:27.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:26 vm07 bash[23044]: audit 2026-03-10T13:51:26.761950+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T13:51:27.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:26 vm07 bash[23044]: audit 2026-03-10T13:51:26.766862+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:51:27.248 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:26 vm07 bash[23044]: audit 2026-03-10T13:51:26.766862+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' 2026-03-10T13:51:28.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:27 vm00 bash[20748]: cluster 2026-03-10T13:51:26.681807+0000 mgr.a (mgr.14388) 187 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:28.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:27 vm00 bash[20748]: cluster 2026-03-10T13:51:26.681807+0000 mgr.a (mgr.14388) 187 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:28.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:27 vm07 bash[23044]: cluster 2026-03-10T13:51:26.681807+0000 mgr.a (mgr.14388) 187 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:28.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:27 vm07 bash[23044]: cluster 2026-03-10T13:51:26.681807+0000 mgr.a (mgr.14388) 187 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:30.218 
INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:30 vm00 bash[20748]: cluster 2026-03-10T13:51:28.682013+0000 mgr.a (mgr.14388) 188 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:30.218 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:30 vm00 bash[20748]: cluster 2026-03-10T13:51:28.682013+0000 mgr.a (mgr.14388) 188 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:30.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:30 vm07 bash[23044]: cluster 2026-03-10T13:51:28.682013+0000 mgr.a (mgr.14388) 188 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:30.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:30 vm07 bash[23044]: cluster 2026-03-10T13:51:28.682013+0000 mgr.a (mgr.14388) 188 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:32.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:32 vm00 bash[20748]: cluster 2026-03-10T13:51:30.682259+0000 mgr.a (mgr.14388) 189 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:32.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:32 vm00 bash[20748]: cluster 2026-03-10T13:51:30.682259+0000 mgr.a (mgr.14388) 189 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:32.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:32 vm07 bash[23044]: cluster 2026-03-10T13:51:30.682259+0000 mgr.a (mgr.14388) 189 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:32.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:32 vm07 bash[23044]: cluster 2026-03-10T13:51:30.682259+0000 mgr.a (mgr.14388) 189 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 
449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:33.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:33 vm00 bash[20748]: audit 2026-03-10T13:51:32.696819+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:51:33.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:33 vm00 bash[20748]: audit 2026-03-10T13:51:32.696819+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:51:33.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:33 vm07 bash[23044]: audit 2026-03-10T13:51:32.696819+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:51:33.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:33 vm07 bash[23044]: audit 2026-03-10T13:51:32.696819+0000 mon.a (mon.0) 624 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:51:34.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:34 vm07 bash[23044]: cluster 2026-03-10T13:51:32.682440+0000 mgr.a (mgr.14388) 190 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:34.247 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:34 vm07 bash[23044]: cluster 2026-03-10T13:51:32.682440+0000 mgr.a (mgr.14388) 190 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:34.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:34 vm00 bash[20748]: cluster 2026-03-10T13:51:32.682440+0000 mgr.a (mgr.14388) 190 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T13:51:34.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:34 vm00 bash[20748]: cluster 2026-03-10T13:51:32.682440+0000 mgr.a (mgr.14388) 190 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:36.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:36 vm00 bash[20748]: cluster 2026-03-10T13:51:34.682663+0000 mgr.a (mgr.14388) 191 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:36.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:36 vm00 bash[20748]: cluster 2026-03-10T13:51:34.682663+0000 mgr.a (mgr.14388) 191 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:36.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:36 vm07 bash[23044]: cluster 2026-03-10T13:51:34.682663+0000 mgr.a (mgr.14388) 191 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:36.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:36 vm07 bash[23044]: cluster 2026-03-10T13:51:34.682663+0000 mgr.a (mgr.14388) 191 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:36.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:51:36 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:51:36] "GET /metrics HTTP/1.1" 200 21393 "" "Prometheus/2.51.0" 2026-03-10T13:51:38.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:38 vm00 bash[20748]: cluster 2026-03-10T13:51:36.682916+0000 mgr.a (mgr.14388) 192 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:38.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:38 vm00 bash[20748]: cluster 2026-03-10T13:51:36.682916+0000 mgr.a (mgr.14388) 192 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 
GiB / 60 GiB avail 2026-03-10T13:51:38.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:38 vm07 bash[23044]: cluster 2026-03-10T13:51:36.682916+0000 mgr.a (mgr.14388) 192 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:38.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:38 vm07 bash[23044]: cluster 2026-03-10T13:51:36.682916+0000 mgr.a (mgr.14388) 192 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:40.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:40 vm00 bash[20748]: cluster 2026-03-10T13:51:38.683115+0000 mgr.a (mgr.14388) 193 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:40.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:40 vm00 bash[20748]: cluster 2026-03-10T13:51:38.683115+0000 mgr.a (mgr.14388) 193 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:40.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:40 vm07 bash[23044]: cluster 2026-03-10T13:51:38.683115+0000 mgr.a (mgr.14388) 193 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:40.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:40 vm07 bash[23044]: cluster 2026-03-10T13:51:38.683115+0000 mgr.a (mgr.14388) 193 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:42.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:42 vm00 bash[20748]: cluster 2026-03-10T13:51:40.683331+0000 mgr.a (mgr.14388) 194 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:42.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:42 vm00 bash[20748]: cluster 2026-03-10T13:51:40.683331+0000 mgr.a (mgr.14388) 194 : 
cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:42.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:42 vm07 bash[23044]: cluster 2026-03-10T13:51:40.683331+0000 mgr.a (mgr.14388) 194 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:42.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:42 vm07 bash[23044]: cluster 2026-03-10T13:51:40.683331+0000 mgr.a (mgr.14388) 194 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:44.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:44 vm00 bash[20748]: cluster 2026-03-10T13:51:42.683545+0000 mgr.a (mgr.14388) 195 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:44.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:44 vm00 bash[20748]: cluster 2026-03-10T13:51:42.683545+0000 mgr.a (mgr.14388) 195 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:44.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:44 vm07 bash[23044]: cluster 2026-03-10T13:51:42.683545+0000 mgr.a (mgr.14388) 195 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:44.497 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:44 vm07 bash[23044]: cluster 2026-03-10T13:51:42.683545+0000 mgr.a (mgr.14388) 195 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:46.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:51:46 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:51:46] "GET /metrics HTTP/1.1" 200 21393 "" "Prometheus/2.51.0" 2026-03-10T13:51:46.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:46 vm00 bash[20748]: cluster 2026-03-10T13:51:44.683740+0000 mgr.a 
(mgr.14388) 196 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:46.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:46 vm00 bash[20748]: cluster 2026-03-10T13:51:44.683740+0000 mgr.a (mgr.14388) 196 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:46.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:46 vm07 bash[23044]: cluster 2026-03-10T13:51:44.683740+0000 mgr.a (mgr.14388) 196 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:46.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:46 vm07 bash[23044]: cluster 2026-03-10T13:51:44.683740+0000 mgr.a (mgr.14388) 196 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:48.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:48 vm00 bash[20748]: cluster 2026-03-10T13:51:46.683949+0000 mgr.a (mgr.14388) 197 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:48.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:48 vm00 bash[20748]: cluster 2026-03-10T13:51:46.683949+0000 mgr.a (mgr.14388) 197 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:48.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:48 vm00 bash[20748]: audit 2026-03-10T13:51:47.697339+0000 mon.a (mon.0) 625 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:51:48.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:48 vm00 bash[20748]: audit 2026-03-10T13:51:47.697339+0000 mon.a (mon.0) 625 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-10T13:51:48.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:48 vm07 bash[23044]: cluster 2026-03-10T13:51:46.683949+0000 mgr.a (mgr.14388) 197 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:48.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:48 vm07 bash[23044]: cluster 2026-03-10T13:51:46.683949+0000 mgr.a (mgr.14388) 197 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:48.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:48 vm07 bash[23044]: audit 2026-03-10T13:51:47.697339+0000 mon.a (mon.0) 625 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:51:48.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:48 vm07 bash[23044]: audit 2026-03-10T13:51:47.697339+0000 mon.a (mon.0) 625 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:51:50.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:50 vm00 bash[20748]: cluster 2026-03-10T13:51:48.684150+0000 mgr.a (mgr.14388) 198 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:50.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:50 vm00 bash[20748]: cluster 2026-03-10T13:51:48.684150+0000 mgr.a (mgr.14388) 198 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:50.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:50 vm07 bash[23044]: cluster 2026-03-10T13:51:48.684150+0000 mgr.a (mgr.14388) 198 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:50.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:50 vm07 bash[23044]: cluster 
2026-03-10T13:51:48.684150+0000 mgr.a (mgr.14388) 198 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:52.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:52 vm00 bash[20748]: cluster 2026-03-10T13:51:50.684346+0000 mgr.a (mgr.14388) 199 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:52.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:52 vm00 bash[20748]: cluster 2026-03-10T13:51:50.684346+0000 mgr.a (mgr.14388) 199 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:52.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:52 vm07 bash[23044]: cluster 2026-03-10T13:51:50.684346+0000 mgr.a (mgr.14388) 199 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:52.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:52 vm07 bash[23044]: cluster 2026-03-10T13:51:50.684346+0000 mgr.a (mgr.14388) 199 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:54.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:54 vm00 bash[20748]: cluster 2026-03-10T13:51:52.684548+0000 mgr.a (mgr.14388) 200 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:54.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:54 vm00 bash[20748]: cluster 2026-03-10T13:51:52.684548+0000 mgr.a (mgr.14388) 200 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:54.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:54 vm07 bash[23044]: cluster 2026-03-10T13:51:52.684548+0000 mgr.a (mgr.14388) 200 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:54.997 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:54 vm07 bash[23044]: cluster 2026-03-10T13:51:52.684548+0000 mgr.a (mgr.14388) 200 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:56.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:56 vm00 bash[20748]: cluster 2026-03-10T13:51:54.684774+0000 mgr.a (mgr.14388) 201 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:56.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:56 vm00 bash[20748]: cluster 2026-03-10T13:51:54.684774+0000 mgr.a (mgr.14388) 201 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:56.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:51:56 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:51:56] "GET /metrics HTTP/1.1" 200 21395 "" "Prometheus/2.51.0" 2026-03-10T13:51:56.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:56 vm07 bash[23044]: cluster 2026-03-10T13:51:54.684774+0000 mgr.a (mgr.14388) 201 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:56.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:56 vm07 bash[23044]: cluster 2026-03-10T13:51:54.684774+0000 mgr.a (mgr.14388) 201 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:58.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:58 vm00 bash[20748]: cluster 2026-03-10T13:51:56.684998+0000 mgr.a (mgr.14388) 202 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:58.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:51:58 vm00 bash[20748]: cluster 2026-03-10T13:51:56.684998+0000 mgr.a (mgr.14388) 202 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-10T13:51:58.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:58 vm07 bash[23044]: cluster 2026-03-10T13:51:56.684998+0000 mgr.a (mgr.14388) 202 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:51:58.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:51:58 vm07 bash[23044]: cluster 2026-03-10T13:51:56.684998+0000 mgr.a (mgr.14388) 202 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:52:00.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:52:00 vm00 bash[20748]: cluster 2026-03-10T13:51:58.685219+0000 mgr.a (mgr.14388) 203 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:52:00.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:52:00 vm00 bash[20748]: cluster 2026-03-10T13:51:58.685219+0000 mgr.a (mgr.14388) 203 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:52:00.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:52:00 vm07 bash[23044]: cluster 2026-03-10T13:51:58.685219+0000 mgr.a (mgr.14388) 203 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:52:00.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:52:00 vm07 bash[23044]: cluster 2026-03-10T13:51:58.685219+0000 mgr.a (mgr.14388) 203 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:52:02.265 INFO:teuthology.orchestra.run.vm00.stderr:+ curl -s http://192.168.123.107:9095/api/v1/status/config 2026-03-10T13:52:02.271 INFO:teuthology.orchestra.run.vm00.stderr:+ curl -s http://192.168.123.107:9095/api/v1/status/config 2026-03-10T13:52:02.271 INFO:teuthology.orchestra.run.vm00.stderr:+ jq -e '.status == "success"' 2026-03-10T13:52:02.272 
INFO:teuthology.orchestra.run.vm00.stdout:{"status":"success","data":{"yaml":"global:\n scrape_interval: 10s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n evaluation_interval: 10s\n external_labels:\n cluster: c9620084-1c86-11f1-bcc5-e3fb709eab0a\nalerting:\n alertmanagers:\n - follow_redirects: true\n enable_http2: true\n scheme: http\n timeout: 10s\n api_version: v2\n http_sd_configs:\n - follow_redirects: true\n enable_http2: true\n refresh_interval: 1m\n url: http://192.168.123.100:8765/sd/prometheus/sd-config?service=alertmanager\nrule_files:\n- /etc/prometheus/alerting/*\nscrape_configs:\n- job_name: ceph\n honor_labels: true\n honor_timestamps: true\n track_timestamps_staleness: false\n scrape_interval: 10s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n metrics_path: /metrics\n scheme: http\n enable_compression: true\n follow_redirects: true\n enable_http2: true\n relabel_configs:\n - source_labels: [__address__]\n separator: ;\n regex: (.*)\n target_label: cluster\n replacement: c9620084-1c86-11f1-bcc5-e3fb709eab0a\n action: replace\n - source_labels: [instance]\n separator: ;\n regex: (.*)\n target_label: instance\n replacement: ceph_cluster\n action: replace\n http_sd_configs:\n - follow_redirects: true\n enable_http2: true\n refresh_interval: 1m\n url: http://192.168.123.100:8765/sd/prometheus/sd-config?service=mgr-prometheus\n- job_name: node\n honor_timestamps: true\n track_timestamps_staleness: false\n scrape_interval: 10s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n metrics_path: /metrics\n scheme: http\n enable_compression: true\n follow_redirects: true\n enable_http2: true\n relabel_configs:\n - source_labels: [__address__]\n separator: ;\n regex: (.*)\n target_label: cluster\n replacement: c9620084-1c86-11f1-bcc5-e3fb709eab0a\n action: 
replace\n http_sd_configs:\n - follow_redirects: true\n enable_http2: true\n refresh_interval: 1m\n url: http://192.168.123.100:8765/sd/prometheus/sd-config?service=node-exporter\n- job_name: ceph-exporter\n honor_labels: true\n honor_timestamps: true\n track_timestamps_staleness: false\n scrape_interval: 10s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n metrics_path: /metrics\n scheme: http\n enable_compression: true\n follow_redirects: true\n enable_http2: true\n relabel_configs:\n - source_labels: [__address__]\n separator: ;\n regex: (.*)\n target_label: cluster\n replacement: c9620084-1c86-11f1-bcc5-e3fb709eab0a\n action: replace\n http_sd_configs:\n - follow_redirects: true\n enable_http2: true\n refresh_interval: 1m\n url: http://192.168.123.100:8765/sd/prometheus/sd-config?service=ceph-exporter\n- job_name: nvmeof\n honor_timestamps: true\n track_timestamps_staleness: false\n scrape_interval: 10s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n metrics_path: /metrics\n scheme: http\n enable_compression: true\n follow_redirects: true\n enable_http2: true\n http_sd_configs:\n - follow_redirects: true\n enable_http2: true\n refresh_interval: 1m\n url: http://192.168.123.100:8765/sd/prometheus/sd-config?service=nvmeof\n- job_name: nfs\n honor_timestamps: true\n track_timestamps_staleness: false\n scrape_interval: 10s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n metrics_path: /metrics\n scheme: http\n enable_compression: true\n follow_redirects: true\n enable_http2: true\n http_sd_configs:\n - follow_redirects: true\n enable_http2: true\n refresh_interval: 1m\n url: http://192.168.123.100:8765/sd/prometheus/sd-config?service=nfs\n- job_name: federate\n honor_labels: true\n honor_timestamps: true\n track_timestamps_staleness: false\n params:\n 
match[]:\n - '{job=\"ceph\"}'\n - '{job=\"node\"}'\n - '{job=\"haproxy\"}'\n - '{job=\"ceph-exporter\"}'\n scrape_interval: 15s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n metrics_path: /federate\n scheme: http\n enable_compression: true\n follow_redirects: true\n enable_http2: true\n static_configs:\n - targets: []\n"}}true 2026-03-10T13:52:02.272 INFO:teuthology.orchestra.run.vm00.stderr:+ curl -s http://192.168.123.107:9095/api/v1/alerts 2026-03-10T13:52:02.275 INFO:teuthology.orchestra.run.vm00.stderr:+ curl -s http://192.168.123.107:9095/api/v1/alerts 2026-03-10T13:52:02.275 INFO:teuthology.orchestra.run.vm00.stderr:+ jq -e '.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"' 2026-03-10T13:52:02.277 INFO:teuthology.orchestra.run.vm00.stdout:{"status":"success","data":{"alerts":[{"labels":{"alertname":"CephHealthWarning","cluster":"c9620084-1c86-11f1-bcc5-e3fb709eab0a","instance":"ceph_cluster","job":"ceph","severity":"warning","type":"ceph_default"},"annotations":{"description":"The cluster state has been HEALTH_WARN for more than 15 minutes. Please check 'ceph health detail' for more information.","summary":"Ceph is in the WARNING state"},"state":"pending","activeAt":"2026-03-10T13:50:43.558815712Z","value":"1e+00"},{"labels":{"alertname":"CephMonDownQuorumAtRisk","oid":"1.3.6.1.4.1.50495.1.2.1.3.1","severity":"critical","type":"ceph_default"},"annotations":{"description":"Quorum requires a majority of monitors (x 2) to be active. Without quorum the cluster will become inoperable, affecting all services and connected clients. 
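[editor's note] The trace above shows the workunit asserting, via `curl` and `jq -e`, that Prometheus reports the `CephMonDown` alert as firing. A minimal stand-alone sketch of that same check (this is not teuthology code; the JSON below is a trimmed stand-in for the `/api/v1/alerts` response body captured in this log):

```python
import json

# Trimmed stand-in for the body returned by GET /api/v1/alerts on the
# Prometheus host (192.168.123.107:9095 in this run).
body = json.loads("""
{"status": "success",
 "data": {"alerts": [
   {"labels": {"alertname": "CephHealthWarning", "severity": "warning"},
    "state": "pending"},
   {"labels": {"alertname": "CephMonDown", "severity": "warning"},
    "state": "firing"}]}}
""")

def mon_down_firing(resp: dict) -> bool:
    # Mirrors the jq filter used in the log:
    #   .data | .alerts | .[] | select(.labels | .alertname == "CephMonDown")
    #   | .state == "firing"
    return any(a.get("state") == "firing"
               for a in resp["data"]["alerts"]
               if a.get("labels", {}).get("alertname") == "CephMonDown")

print(mon_down_firing(body))  # True when the alert is firing
```

Note the `v1` alerts API nests the state at the top level of each alert (`.state`), unlike Alertmanager's `v2` API checked later in the run.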
The following monitors are down: - mon.c on vm08","documentation":"https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-down","summary":"Monitor quorum is at risk"},"state":"firing","activeAt":"2026-03-10T13:50:42.639590217Z","value":"1e+00"},{"labels":{"alertname":"CephMonDown","severity":"warning","type":"ceph_default"},"annotations":{"description":"You have 1 monitor down. Quorum is still intact, but the loss of an additional monitor will make your cluster inoperable. The following monitors are down: - mon.c on vm08\n","documentation":"https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-down","summary":"One or more monitors down"},"state":"firing","activeAt":"2026-03-10T13:50:42.639590217Z","value":"1e+00"}]}}true 2026-03-10T13:52:02.277 INFO:teuthology.orchestra.run.vm00.stderr:+ curl -s http://192.168.123.108:9093/api/v2/status 2026-03-10T13:52:02.280 INFO:teuthology.orchestra.run.vm00.stdout:{"cluster":{"name":"01KKBZZVTCCADAW9166ATN2VT5","peers":[{"address":"192.168.123.108:9094","name":"01KKBZZVTCCADAW9166ATN2VT5"}],"status":"ready"},"config":{"original":"global:\n resolve_timeout: 5m\n http_config:\n tls_config:\n insecure_skip_verify: true\n follow_redirects: true\n enable_http2: true\n smtp_hello: localhost\n smtp_require_tls: true\n pagerduty_url: https://events.pagerduty.com/v2/enqueue\n opsgenie_api_url: https://api.opsgenie.com/\n wechat_api_url: https://qyapi.weixin.qq.com/cgi-bin/\n victorops_api_url: https://alert.victorops.com/integrations/generic/20131114/alert/\n telegram_api_url: https://api.telegram.org\n webex_api_url: https://webexapis.com/v1/messages\nroute:\n receiver: default\n continue: false\n routes:\n - receiver: ceph-dashboard\n group_by:\n - alertname\n continue: false\n group_wait: 10s\n group_interval: 10s\n repeat_interval: 1h\nreceivers:\n- name: default\n- name: ceph-dashboard\n webhook_configs:\n - send_resolved: true\n http_config:\n tls_config:\n insecure_skip_verify: true\n follow_redirects: 
true\n enable_http2: true\n url: https://vm00.local:8443/api/prometheus_receiver\n max_alerts: 0\n - send_resolved: true\n http_config:\n tls_config:\n insecure_skip_verify: true\n follow_redirects: true\n enable_http2: true\n url: https://vm07.local:8443/api/prometheus_receiver\n max_alerts: 0\ntemplates: []\n"},"uptime":"2026-03-10T13:46:40.333Z","versionInfo":{"branch":"HEAD","buildDate":"20221222-14:51:36","buildUser":"root@abe866dd5717","goVersion":"go1.19.4","revision":"258fab7cdd551f2cf251ed0348f0ad7289aee789","version":"0.25.0"}} 2026-03-10T13:52:02.280 INFO:teuthology.orchestra.run.vm00.stderr:+ curl -s http://192.168.123.108:9093/api/v2/alerts 2026-03-10T13:52:02.283 INFO:teuthology.orchestra.run.vm00.stdout:[{"annotations":{"description":"Quorum requires a majority of monitors (x 2) to be active. Without quorum the cluster will become inoperable, affecting all services and connected clients. The following monitors are down: - mon.c on vm08","documentation":"https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-down","summary":"Monitor quorum is at risk"},"endsAt":"2026-03-10T13:55:12.639Z","fingerprint":"1eabc5cdf19196cb","receivers":[{"name":"ceph-dashboard"}],"startsAt":"2026-03-10T13:51:12.639Z","status":{"inhibitedBy":[],"silencedBy":[],"state":"active"},"updatedAt":"2026-03-10T13:51:12.640Z","generatorURL":"http://vm07.local:9095/graph?g0.expr=%28%28ceph_health_detail%7Bname%3D%22MON_DOWN%22%7D+%3D%3D+1%29+%2A+on+%28%29+%28count%28ceph_mon_quorum_status+%3D%3D+1%29+%3D%3D+bool+%28floor%28count%28ceph_mon_metadata%29+%2F+2%29+%2B+1%29%29%29+%3D%3D+1\u0026g0.tab=1","labels":{"alertname":"CephMonDownQuorumAtRisk","cluster":"c9620084-1c86-11f1-bcc5-e3fb709eab0a","oid":"1.3.6.1.4.1.50495.1.2.1.3.1","severity":"critical","type":"ceph_default"}},{"annotations":{"description":"You have 1 monitor down. Quorum is still intact, but the loss of an additional monitor will make your cluster inoperable. 
The following monitors are down: - mon.c on vm08\n","documentation":"https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-down","summary":"One or more monitors down"},"endsAt":"2026-03-10T13:55:12.639Z","fingerprint":"a1b1f7fd44f074f8","receivers":[{"name":"ceph-dashboard"}],"startsAt":"2026-03-10T13:51:12.639Z","status":{"inhibitedBy":[],"silencedBy":[],"state":"active"},"updatedAt":"2026-03-10T13:51:12.640Z","generatorURL":"http://vm07.local:9095/graph?g0.expr=count%28ceph_mon_quorum_status+%3D%3D+0%29+%3C%3D+%28count%28ceph_mon_metadata%29+-+floor%28count%28ceph_mon_metadata%29+%2F+2%29+%2B+1%29\u0026g0.tab=1","labels":{"alertname":"CephMonDown","cluster":"c9620084-1c86-11f1-bcc5-e3fb709eab0a","severity":"warning","type":"ceph_default"}}] 2026-03-10T13:52:02.283 INFO:teuthology.orchestra.run.vm00.stderr:+ curl -s http://192.168.123.108:9093/api/v2/alerts 2026-03-10T13:52:02.283 INFO:teuthology.orchestra.run.vm00.stderr:+ jq -e '.[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"' 2026-03-10T13:52:02.286 INFO:teuthology.orchestra.run.vm00.stdout:true 2026-03-10T13:52:02.335 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-10T13:52:02.337 INFO:tasks.cephadm:Teardown begin 2026-03-10T13:52:02.337 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T13:52:02.348 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T13:52:02.355 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T13:52:02.362 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-10T13:52:02.362 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a -- ceph 
mgr module disable cephadm 2026-03-10T13:52:02.637 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:52:02 vm00 bash[20748]: cluster 2026-03-10T13:52:00.685452+0000 mgr.a (mgr.14388) 204 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:52:02.638 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:52:02 vm00 bash[20748]: cluster 2026-03-10T13:52:00.685452+0000 mgr.a (mgr.14388) 204 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:52:02.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:52:02 vm07 bash[23044]: cluster 2026-03-10T13:52:00.685452+0000 mgr.a (mgr.14388) 204 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:52:02.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:52:02 vm07 bash[23044]: cluster 2026-03-10T13:52:00.685452+0000 mgr.a (mgr.14388) 204 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:52:03.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:52:03 vm00 bash[20748]: audit 2026-03-10T13:52:02.697604+0000 mon.a (mon.0) 626 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:52:03.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:52:03 vm00 bash[20748]: audit 2026-03-10T13:52:02.697604+0000 mon.a (mon.0) 626 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:52:03.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:52:03 vm07 bash[23044]: audit 2026-03-10T13:52:02.697604+0000 mon.a (mon.0) 626 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:52:03.997 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:52:03 vm07 bash[23044]: audit 2026-03-10T13:52:02.697604+0000 mon.a (mon.0) 626 : audit [DBG] from='mgr.14388 192.168.123.100:0/597011496' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T13:52:04.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:52:04 vm00 bash[20748]: cluster 2026-03-10T13:52:02.685676+0000 mgr.a (mgr.14388) 205 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:52:04.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:52:04 vm00 bash[20748]: cluster 2026-03-10T13:52:02.685676+0000 mgr.a (mgr.14388) 205 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:52:04.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:52:04 vm07 bash[23044]: cluster 2026-03-10T13:52:02.685676+0000 mgr.a (mgr.14388) 205 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:52:04.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:52:04 vm07 bash[23044]: cluster 2026-03-10T13:52:02.685676+0000 mgr.a (mgr.14388) 205 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:52:06.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:52:06 vm00 bash[21015]: ::ffff:192.168.123.107 - - [10/Mar/2026:13:52:06] "GET /metrics HTTP/1.1" 200 21392 "" "Prometheus/2.51.0" 2026-03-10T13:52:06.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:52:06 vm00 bash[20748]: cluster 2026-03-10T13:52:04.685896+0000 mgr.a (mgr.14388) 206 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:52:06.968 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:52:06 vm00 bash[20748]: cluster 2026-03-10T13:52:04.685896+0000 mgr.a (mgr.14388) 206 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB 
used, 60 GiB / 60 GiB avail 2026-03-10T13:52:06.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:52:06 vm07 bash[23044]: cluster 2026-03-10T13:52:04.685896+0000 mgr.a (mgr.14388) 206 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:52:06.997 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:52:06 vm07 bash[23044]: cluster 2026-03-10T13:52:04.685896+0000 mgr.a (mgr.14388) 206 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-10T13:52:07.012 INFO:teuthology.orchestra.run.vm00.stderr:Inferring config /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/mon.a/config 2026-03-10T13:52:07.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T13:52:07.155+0000 7f7b2038f640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-10T13:52:07.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T13:52:07.155+0000 7f7b2038f640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-10T13:52:07.162 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T13:52:07.155+0000 7f7b2038f640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-10T13:52:07.162 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T13:52:07.155+0000 7f7b2038f640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-10T13:52:07.162 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T13:52:07.155+0000 7f7b2038f640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-10T13:52:07.162 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T13:52:07.155+0000 7f7b2038f640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-10T13:52:07.162 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-10T13:52:07.155+0000 7f7b2038f640 -1 monclient: keyring not found 2026-03-10T13:52:07.162 INFO:teuthology.orchestra.run.vm00.stderr:[errno 21] error connecting to the cluster 2026-03-10T13:52:07.209 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T13:52:07.209 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-10T13:52:07.209 DEBUG:teuthology.orchestra.run.vm00:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T13:52:07.212 DEBUG:teuthology.orchestra.run.vm07:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T13:52:07.215 DEBUG:teuthology.orchestra.run.vm08:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-10T13:52:07.218 INFO:tasks.cephadm:Stopping all daemons... 2026-03-10T13:52:07.218 INFO:tasks.cephadm.mon.a:Stopping mon.a... 2026-03-10T13:52:07.218 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mon.a 2026-03-10T13:52:07.310 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:52:07 vm00 systemd[1]: Stopping Ceph mon.a for c9620084-1c86-11f1-bcc5-e3fb709eab0a... 
2026-03-10T13:52:07.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:52:07 vm00 bash[20748]: debug 2026-03-10T13:52:07.299+0000 7f6f2f20b640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T13:52:07.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:52:07 vm00 bash[20748]: debug 2026-03-10T13:52:07.299+0000 7f6f2f20b640 -1 mon.a@0(leader) e3 *** Got Signal Terminated *** 2026-03-10T13:52:07.468 INFO:journalctl@ceph.mon.a.vm00.stdout:Mar 10 13:52:07 vm00 bash[38651]: ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a-mon-a 2026-03-10T13:52:07.468 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:52:07 vm00 bash[21015]: [10/Mar/2026:13:52:07] ENGINE Bus STOPPING 2026-03-10T13:52:07.482 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mon.a.service' 2026-03-10T13:52:07.493 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T13:52:07.493 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-10T13:52:07.493 INFO:tasks.cephadm.mon.c:Stopping mon.b... 
2026-03-10T13:52:07.493 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mon.b 2026-03-10T13:52:07.733 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:52:07 vm00 bash[21015]: [10/Mar/2026:13:52:07] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T13:52:07.733 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:52:07 vm00 bash[21015]: [10/Mar/2026:13:52:07] ENGINE Bus STOPPED 2026-03-10T13:52:07.733 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:52:07 vm00 bash[21015]: [10/Mar/2026:13:52:07] ENGINE Bus STARTING 2026-03-10T13:52:07.798 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:52:07 vm07 systemd[1]: Stopping Ceph mon.b for c9620084-1c86-11f1-bcc5-e3fb709eab0a... 2026-03-10T13:52:07.798 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:52:07 vm07 bash[23044]: debug 2026-03-10T13:52:07.532+0000 7f7b45949640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T13:52:07.798 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:52:07 vm07 bash[23044]: debug 2026-03-10T13:52:07.532+0000 7f7b45949640 -1 mon.b@2(peon) e3 *** Got Signal Terminated *** 2026-03-10T13:52:07.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:52:07 vm00 bash[21015]: [10/Mar/2026:13:52:07] ENGINE Serving on http://:::9283 2026-03-10T13:52:07.968 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:52:07 vm00 bash[21015]: [10/Mar/2026:13:52:07] ENGINE Bus STARTED 2026-03-10T13:52:08.144 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 13:52:08 vm07 bash[31030]: ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a-mon-b 2026-03-10T13:52:08.155 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u 
ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mon.b.service' 2026-03-10T13:52:08.166 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T13:52:08.166 INFO:tasks.cephadm.mon.c:Stopped mon.b 2026-03-10T13:52:08.166 INFO:tasks.cephadm.mon.c:Stopping mon.c... 2026-03-10T13:52:08.166 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mon.c 2026-03-10T13:52:08.176 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mon.c.service' 2026-03-10T13:52:08.232 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T13:52:08.232 INFO:tasks.cephadm.mon.c:Stopped mon.c 2026-03-10T13:52:08.232 INFO:tasks.cephadm.mgr.a:Stopping mgr.a... 2026-03-10T13:52:08.232 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mgr.a 2026-03-10T13:52:08.311 INFO:journalctl@ceph.mgr.a.vm00.stdout:Mar 10 13:52:08 vm00 systemd[1]: Stopping Ceph mgr.a for c9620084-1c86-11f1-bcc5-e3fb709eab0a... 2026-03-10T13:52:08.408 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mgr.a.service' 2026-03-10T13:52:08.419 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T13:52:08.419 INFO:tasks.cephadm.mgr.a:Stopped mgr.a 2026-03-10T13:52:08.419 INFO:tasks.cephadm.mgr.b:Stopping mgr.b... 2026-03-10T13:52:08.419 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mgr.b 2026-03-10T13:52:08.747 INFO:journalctl@ceph.mgr.b.vm07.stdout:Mar 10 13:52:08 vm07 systemd[1]: Stopping Ceph mgr.b for c9620084-1c86-11f1-bcc5-e3fb709eab0a... 
2026-03-10T13:52:08.822 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@mgr.b.service' 2026-03-10T13:52:08.833 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T13:52:08.833 INFO:tasks.cephadm.mgr.b:Stopped mgr.b 2026-03-10T13:52:08.833 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-10T13:52:08.833 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@osd.0 2026-03-10T13:52:09.218 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 13:52:08 vm00 systemd[1]: Stopping Ceph osd.0 for c9620084-1c86-11f1-bcc5-e3fb709eab0a... 2026-03-10T13:52:09.218 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 13:52:08 vm00 bash[30637]: debug 2026-03-10T13:52:08.875+0000 7f709e4ed640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T13:52:09.218 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 13:52:08 vm00 bash[30637]: debug 2026-03-10T13:52:08.875+0000 7f709e4ed640 -1 osd.0 23 *** Got signal Terminated *** 2026-03-10T13:52:09.218 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 13:52:08 vm00 bash[30637]: debug 2026-03-10T13:52:08.875+0000 7f709e4ed640 -1 osd.0 23 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T13:52:14.218 INFO:journalctl@ceph.osd.0.vm00.stdout:Mar 10 13:52:13 vm00 bash[38837]: ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a-osd-0 2026-03-10T13:52:14.260 DEBUG:teuthology.orchestra.run.vm00:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@osd.0.service' 2026-03-10T13:52:14.284 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T13:52:14.284 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-10T13:52:14.285 INFO:tasks.cephadm.osd.1:Stopping osd.1... 
2026-03-10T13:52:14.285 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@osd.1 2026-03-10T13:52:14.747 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:52:14 vm07 systemd[1]: Stopping Ceph osd.1 for c9620084-1c86-11f1-bcc5-e3fb709eab0a... 2026-03-10T13:52:14.748 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:52:14 vm07 bash[25998]: debug 2026-03-10T13:52:14.328+0000 7f94b2930640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T13:52:14.748 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:52:14 vm07 bash[25998]: debug 2026-03-10T13:52:14.328+0000 7f94b2930640 -1 osd.1 23 *** Got signal Terminated *** 2026-03-10T13:52:14.748 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:52:14 vm07 bash[25998]: debug 2026-03-10T13:52:14.328+0000 7f94b2930640 -1 osd.1 23 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T13:52:19.747 INFO:journalctl@ceph.osd.1.vm07.stdout:Mar 10 13:52:19 vm07 bash[31207]: ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a-osd-1 2026-03-10T13:52:19.805 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@osd.1.service' 2026-03-10T13:52:19.823 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T13:52:19.823 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-10T13:52:19.823 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-10T13:52:19.823 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@osd.2 2026-03-10T13:52:20.089 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 13:52:19 vm08 systemd[1]: Stopping Ceph osd.2 for c9620084-1c86-11f1-bcc5-e3fb709eab0a... 
2026-03-10T13:52:20.089 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 13:52:19 vm08 bash[26271]: debug 2026-03-10T13:52:19.857+0000 7f8635ad0640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T13:52:20.089 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 13:52:19 vm08 bash[26271]: debug 2026-03-10T13:52:19.857+0000 7f8635ad0640 -1 osd.2 23 *** Got signal Terminated *** 2026-03-10T13:52:20.089 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 13:52:19 vm08 bash[26271]: debug 2026-03-10T13:52:19.857+0000 7f8635ad0640 -1 osd.2 23 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T13:52:25.187 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 10 13:52:24 vm08 bash[31524]: ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a-osd-2 2026-03-10T13:52:25.225 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-c9620084-1c86-11f1-bcc5-e3fb709eab0a@osd.2.service' 2026-03-10T13:52:25.241 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-10T13:52:25.241 INFO:tasks.cephadm.osd.2:Stopped osd.2 2026-03-10T13:52:25.241 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a --force --keep-logs 2026-03-10T13:52:25.333 INFO:teuthology.orchestra.run.vm00.stdout:Deleting cluster with fsid: c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:52:31.481 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a --force --keep-logs 2026-03-10T13:52:31.571 INFO:teuthology.orchestra.run.vm07.stdout:Deleting cluster with fsid: c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:52:37.765 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a --force --keep-logs 
2026-03-10T13:52:37.852 INFO:teuthology.orchestra.run.vm08.stdout:Deleting cluster with fsid: c9620084-1c86-11f1-bcc5-e3fb709eab0a 2026-03-10T13:52:43.831 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T13:52:43.838 INFO:teuthology.orchestra.run.vm00.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory 2026-03-10T13:52:43.839 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-10T13:52:43.839 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T13:52:43.846 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-10T13:52:43.852 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-10T13:52:43.853 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1053/remote/vm00/crash 2026-03-10T13:52:43.853 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/crash -- . 2026-03-10T13:52:43.885 INFO:teuthology.orchestra.run.vm00.stderr:tar: /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/crash: Cannot open: No such file or directory 2026-03-10T13:52:43.885 INFO:teuthology.orchestra.run.vm00.stderr:tar: Error is not recoverable: exiting now 2026-03-10T13:52:43.885 DEBUG:teuthology.misc:Transferring archived files from vm07:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1053/remote/vm07/crash 2026-03-10T13:52:43.885 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/crash -- . 
2026-03-10T13:52:43.892 INFO:teuthology.orchestra.run.vm07.stderr:tar: /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/crash: Cannot open: No such file or directory 2026-03-10T13:52:43.892 INFO:teuthology.orchestra.run.vm07.stderr:tar: Error is not recoverable: exiting now 2026-03-10T13:52:43.893 DEBUG:teuthology.misc:Transferring archived files from vm08:/var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1053/remote/vm08/crash 2026-03-10T13:52:43.893 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/crash -- . 2026-03-10T13:52:43.902 INFO:teuthology.orchestra.run.vm08.stderr:tar: /var/lib/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/crash: Cannot open: No such file or directory 2026-03-10T13:52:43.902 INFO:teuthology.orchestra.run.vm08.stderr:tar: Error is not recoverable: exiting now 2026-03-10T13:52:43.903 INFO:tasks.cephadm:Checking cluster log for badness... 2026-03-10T13:52:43.903 DEBUG:teuthology.orchestra.run.vm00:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v MON_DOWN | egrep -v 'mons down' | egrep -v 'mon down' | egrep -v 'out of quorum' | egrep -v CEPHADM_STRAY_DAEMON | egrep -v CEPHADM_FAILED_DAEMON | head -n 1 2026-03-10T13:52:43.939 INFO:tasks.cephadm:Compressing logs... 
2026-03-10T13:52:43.939 DEBUG:teuthology.orchestra.run.vm00:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T13:52:43.980 DEBUG:teuthology.orchestra.run.vm07:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T13:52:43.981 DEBUG:teuthology.orchestra.run.vm08:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-10T13:52:43.986 INFO:teuthology.orchestra.run.vm00.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T13:52:43.987 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T13:52:43.987 INFO:teuthology.orchestra.run.vm07.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T13:52:43.987 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T13:52:43.987 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-mgr.a.log 2026-03-10T13:52:43.988 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.log 2026-03-10T13:52:43.988 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.log 2026-03-10T13:52:43.988 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-mon.b.log 2026-03-10T13:52:43.989 INFO:teuthology.orchestra.run.vm08.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-10T13:52:43.989 
INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.log: 87.9% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.log.gz 2026-03-10T13:52:43.990 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-10T13:52:43.990 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-osd.1.log 2026-03-10T13:52:43.990 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.log 2026-03-10T13:52:43.990 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-mon.c.log 2026-03-10T13:52:43.990 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-mgr.a.log: gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-mon.a.log 2026-03-10T13:52:43.990 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-mon.b.log: 84.9% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T13:52:43.991 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-mgr.b.log 2026-03-10T13:52:43.991 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.log: 87.9% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.log.gz 2026-03-10T13:52:43.991 INFO:teuthology.orchestra.run.vm08.stderr: 88.6% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-10T13:52:43.991 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-osd.2.log 2026-03-10T13:52:43.991 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- 
/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.audit.log 2026-03-10T13:52:43.996 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.log: 87.9% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.log.gz 2026-03-10T13:52:43.996 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.audit.log 2026-03-10T13:52:43.999 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.audit.log 2026-03-10T13:52:44.000 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-mon.c.log: /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-volume.log 2026-03-10T13:52:44.001 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.audit.log: 90.2% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.audit.log.gz 2026-03-10T13:52:44.004 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.cephadm.log 2026-03-10T13:52:44.007 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-volume.log 2026-03-10T13:52:44.008 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-volume.log: 95.8% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-volume.log.gz 2026-03-10T13:52:44.011 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.audit.log: 92.4% 90.1% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.audit.log.gz 
2026-03-10T13:52:44.011 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-mgr.b.log: 91.1%gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-volume.log
2026-03-10T13:52:44.011 INFO:teuthology.orchestra.run.vm00.stderr: -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-10T13:52:44.011 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.audit.log: -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-mgr.b.log.gz
2026-03-10T13:52:44.011 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.cephadm.log
2026-03-10T13:52:44.012 INFO:teuthology.orchestra.run.vm07.stderr: 90.3% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.audit.log.gz
2026-03-10T13:52:44.012 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.cephadm.log
2026-03-10T13:52:44.015 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-volume.log: 95.8% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-volume.log.gz
2026-03-10T13:52:44.016 INFO:teuthology.orchestra.run.vm07.stderr:/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.cephadm.log: 80.3% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.cephadm.log.gz
2026-03-10T13:52:44.016 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.cephadm.log: 80.3% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.cephadm.log.gz
2026-03-10T13:52:44.019 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-volume.log: gzip -5 --verbose -- /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-osd.0.log
2026-03-10T13:52:44.020 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.cephadm.log: 82.3% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph.cephadm.log.gz
2026-03-10T13:52:44.021 INFO:teuthology.orchestra.run.vm08.stderr: 93.3% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-osd.2.log.gz
2026-03-10T13:52:44.023 INFO:teuthology.orchestra.run.vm00.stderr:/var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-osd.0.log: 95.8% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-volume.log.gz
2026-03-10T13:52:44.037 INFO:teuthology.orchestra.run.vm07.stderr: 93.5% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-osd.1.log.gz
2026-03-10T13:52:44.040 INFO:teuthology.orchestra.run.vm08.stderr: 93.2% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-mon.c.log.gz
2026-03-10T13:52:44.041 INFO:teuthology.orchestra.run.vm08.stderr:
2026-03-10T13:52:44.041 INFO:teuthology.orchestra.run.vm08.stderr:real 0m0.057s
2026-03-10T13:52:44.041 INFO:teuthology.orchestra.run.vm08.stderr:user 0m0.072s
2026-03-10T13:52:44.041 INFO:teuthology.orchestra.run.vm08.stderr:sys 0m0.019s
2026-03-10T13:52:44.059 INFO:teuthology.orchestra.run.vm07.stderr: 92.9% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-mon.b.log.gz
2026-03-10T13:52:44.060 INFO:teuthology.orchestra.run.vm07.stderr:
2026-03-10T13:52:44.060 INFO:teuthology.orchestra.run.vm07.stderr:real 0m0.078s
2026-03-10T13:52:44.060 INFO:teuthology.orchestra.run.vm07.stderr:user 0m0.117s
2026-03-10T13:52:44.060 INFO:teuthology.orchestra.run.vm07.stderr:sys 0m0.012s
2026-03-10T13:52:44.065 INFO:teuthology.orchestra.run.vm00.stderr: 90.8% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-mgr.a.log.gz
2026-03-10T13:52:44.067 INFO:teuthology.orchestra.run.vm00.stderr: 93.3% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-osd.0.log.gz
2026-03-10T13:52:44.207 INFO:teuthology.orchestra.run.vm00.stderr: 91.2% -- replaced with /var/log/ceph/c9620084-1c86-11f1-bcc5-e3fb709eab0a/ceph-mon.a.log.gz
2026-03-10T13:52:44.209 INFO:teuthology.orchestra.run.vm00.stderr:
2026-03-10T13:52:44.209 INFO:teuthology.orchestra.run.vm00.stderr:real 0m0.227s
2026-03-10T13:52:44.209 INFO:teuthology.orchestra.run.vm00.stderr:user 0m0.285s
2026-03-10T13:52:44.209 INFO:teuthology.orchestra.run.vm00.stderr:sys 0m0.025s
2026-03-10T13:52:44.209 INFO:tasks.cephadm:Archiving logs...
2026-03-10T13:52:44.209 DEBUG:teuthology.misc:Transferring archived files from vm00:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1053/remote/vm00/log
2026-03-10T13:52:44.209 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T13:52:44.279 DEBUG:teuthology.misc:Transferring archived files from vm07:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1053/remote/vm07/log
2026-03-10T13:52:44.279 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T13:52:44.291 DEBUG:teuthology.misc:Transferring archived files from vm08:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1053/remote/vm08/log
2026-03-10T13:52:44.291 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T13:52:44.306 INFO:tasks.cephadm:Removing cluster...
2026-03-10T13:52:44.306 DEBUG:teuthology.orchestra.run.vm00:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a --force
2026-03-10T13:52:44.418 INFO:teuthology.orchestra.run.vm00.stdout:Deleting cluster with fsid: c9620084-1c86-11f1-bcc5-e3fb709eab0a
2026-03-10T13:52:45.687 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a --force
2026-03-10T13:52:45.779 INFO:teuthology.orchestra.run.vm07.stdout:Deleting cluster with fsid: c9620084-1c86-11f1-bcc5-e3fb709eab0a
2026-03-10T13:52:47.005 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid c9620084-1c86-11f1-bcc5-e3fb709eab0a --force
2026-03-10T13:52:47.094 INFO:teuthology.orchestra.run.vm08.stdout:Deleting cluster with fsid: c9620084-1c86-11f1-bcc5-e3fb709eab0a
2026-03-10T13:52:48.366 INFO:tasks.cephadm:Removing cephadm ...
2026-03-10T13:52:48.366 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T13:52:48.369 DEBUG:teuthology.orchestra.run.vm07:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T13:52:48.373 DEBUG:teuthology.orchestra.run.vm08:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-10T13:52:48.376 INFO:tasks.cephadm:Teardown complete
2026-03-10T13:52:48.376 DEBUG:teuthology.run_tasks:Unwinding manager install
2026-03-10T13:52:48.378 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer...
2026-03-10T13:52:48.378 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-10T13:52:48.412 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-10T13:52:48.415 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-10T13:52:48.435 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-10T13:52:48.435 DEBUG:teuthology.orchestra.run.vm00:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-10T13:52:48.441 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-10T13:52:48.441 DEBUG:teuthology.orchestra.run.vm07:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-10T13:52:48.447 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-10T13:52:48.447 DEBUG:teuthology.orchestra.run.vm08:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-10T13:52:48.496 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T13:52:48.509 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists...
2026-03-10T13:52:48.509 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T13:52:48.714 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T13:52:48.715 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T13:52:48.717 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree...
2026-03-10T13:52:48.717 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information...
2026-03-10T13:52:48.719 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T13:52:48.720 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T13:52:48.923 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:52:48.923 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T13:52:48.924 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-10T13:52:48.924 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:52:48.937 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED:
2026-03-10T13:52:48.938 INFO:teuthology.orchestra.run.vm08.stdout: ceph*
2026-03-10T13:52:48.954 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:52:48.955 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T13:52:48.956 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-10T13:52:48.956 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:52:48.971 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T13:52:48.972 INFO:teuthology.orchestra.run.vm00.stdout: ceph*
2026-03-10T13:52:48.986 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:52:48.987 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T13:52:48.988 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-10T13:52:48.988 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:52:49.005 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-10T13:52:49.006 INFO:teuthology.orchestra.run.vm07.stdout: ceph*
2026-03-10T13:52:49.133 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded.
2026-03-10T13:52:49.133 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-10T13:52:49.158 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded.
2026-03-10T13:52:49.158 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-10T13:52:49.179 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 118605 files and directories currently installed.)
2026-03-10T13:52:49.182 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:49.191 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118605 files and directories currently installed.)
2026-03-10T13:52:49.192 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded.
2026-03-10T13:52:49.192 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-10T13:52:49.193 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:49.232 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 118605 files and directories currently installed.)
2026-03-10T13:52:49.235 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:50.422 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:52:50.456 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists...
2026-03-10T13:52:50.527 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:52:50.531 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:52:50.565 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T13:52:50.572 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T13:52:50.672 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree...
2026-03-10T13:52:50.673 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information...
2026-03-10T13:52:50.785 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T13:52:50.786 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T13:52:50.802 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T13:52:50.802 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T13:52:50.880 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:52:50.880 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T13:52:50.881 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-10T13:52:50.881 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:52:50.898 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED:
2026-03-10T13:52:50.908 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-cephadm* cephadm*
2026-03-10T13:52:51.024 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:52:51.024 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T13:52:51.025 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-10T13:52:51.025 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:52:51.043 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T13:52:51.044 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm* cephadm*
2026-03-10T13:52:51.048 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:52:51.048 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T13:52:51.048 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-10T13:52:51.048 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:52:51.056 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-10T13:52:51.056 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-cephadm* cephadm*
2026-03-10T13:52:51.084 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 2 to remove and 12 not upgraded.
2026-03-10T13:52:51.084 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-10T13:52:51.128 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 118603 files and directories currently installed.)
2026-03-10T13:52:51.131 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:51.150 INFO:teuthology.orchestra.run.vm08.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:51.183 INFO:teuthology.orchestra.run.vm08.stdout:Looking for files to backup/remove ...
2026-03-10T13:52:51.184 INFO:teuthology.orchestra.run.vm08.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-10T13:52:51.186 INFO:teuthology.orchestra.run.vm08.stdout:Removing user `cephadm' ...
2026-03-10T13:52:51.187 INFO:teuthology.orchestra.run.vm08.stdout:Warning: group `nogroup' has no more members.
2026-03-10T13:52:51.203 INFO:teuthology.orchestra.run.vm08.stdout:Done.
2026-03-10T13:52:51.227 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T13:52:51.245 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 2 to remove and 12 not upgraded.
2026-03-10T13:52:51.245 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-10T13:52:51.254 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 2 to remove and 12 not upgraded.
2026-03-10T13:52:51.254 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-10T13:52:51.290 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118603 files and directories currently installed.)
2026-03-10T13:52:51.293 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:51.295 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 118603 files and directories currently installed.)
2026-03-10T13:52:51.297 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:51.314 INFO:teuthology.orchestra.run.vm00.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:51.315 INFO:teuthology.orchestra.run.vm07.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:51.339 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-10T13:52:51.342 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:51.346 INFO:teuthology.orchestra.run.vm00.stdout:Looking for files to backup/remove ...
2026-03-10T13:52:51.348 INFO:teuthology.orchestra.run.vm00.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-10T13:52:51.350 INFO:teuthology.orchestra.run.vm07.stdout:Looking for files to backup/remove ...
2026-03-10T13:52:51.350 INFO:teuthology.orchestra.run.vm00.stdout:Removing user `cephadm' ...
2026-03-10T13:52:51.350 INFO:teuthology.orchestra.run.vm00.stdout:Warning: group `nogroup' has no more members.
2026-03-10T13:52:51.351 INFO:teuthology.orchestra.run.vm07.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-10T13:52:51.353 INFO:teuthology.orchestra.run.vm07.stdout:Removing user `cephadm' ...
2026-03-10T13:52:51.353 INFO:teuthology.orchestra.run.vm07.stdout:Warning: group `nogroup' has no more members.
2026-03-10T13:52:51.360 INFO:teuthology.orchestra.run.vm00.stdout:Done.
2026-03-10T13:52:51.362 INFO:teuthology.orchestra.run.vm07.stdout:Done.
2026-03-10T13:52:51.386 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T13:52:51.392 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T13:52:51.507 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-10T13:52:51.510 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:51.512 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-10T13:52:51.517 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:52.601 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:52:52.639 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists...
2026-03-10T13:52:52.806 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:52:52.849 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T13:52:52.867 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:52:52.873 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree...
2026-03-10T13:52:52.873 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information...
2026-03-10T13:52:52.904 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T13:52:53.090 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T13:52:53.091 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T13:52:53.109 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:52:53.109 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T13:52:53.110 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-10T13:52:53.110 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:52:53.128 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED:
2026-03-10T13:52:53.129 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mds*
2026-03-10T13:52:53.146 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T13:52:53.146 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T13:52:53.328 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded.
2026-03-10T13:52:53.328 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-10T13:52:53.352 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:52:53.352 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T13:52:53.353 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-10T13:52:53.353 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:52:53.365 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-10T13:52:53.367 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:53.371 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:52:53.371 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-10T13:52:53.372 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T13:52:53.372 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-10T13:52:53.372 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:52:53.373 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds*
2026-03-10T13:52:53.380 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-10T13:52:53.381 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mds*
2026-03-10T13:52:53.555 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded.
2026-03-10T13:52:53.556 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-10T13:52:53.575 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded.
2026-03-10T13:52:53.575 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-10T13:52:53.603 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-10T13:52:53.607 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:53.622 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-10T13:52:53.625 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:53.775 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T13:52:53.881 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-10T13:52:53.884 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:54.039 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T13:52:54.061 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T13:52:54.148 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-10T13:52:54.150 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:54.169 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-10T13:52:54.172 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:55.549 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:52:55.584 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists...
2026-03-10T13:52:55.664 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:52:55.701 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T13:52:55.739 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:52:55.774 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T13:52:55.797 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree...
2026-03-10T13:52:55.798 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information...
2026-03-10T13:52:55.929 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T13:52:55.930 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T13:52:55.984 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T13:52:55.985 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T13:52:56.021 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:52:56.021 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-10T13:52:56.022 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-10T13:52:56.022 INFO:teuthology.orchestra.run.vm08.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot
2026-03-10T13:52:56.022 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:52:56.022 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:52:56.022 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:52:56.023 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:52:56.023 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-psutil python3-pyinotify
2026-03-10T13:52:56.023 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-10T13:52:56.023 INFO:teuthology.orchestra.run.vm08.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T13:52:56.023 INFO:teuthology.orchestra.run.vm08.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T13:52:56.023 INFO:teuthology.orchestra.run.vm08.stdout: python3-threadpoolctl python3-waitress python3-webob python3-websocket
2026-03-10T13:52:56.023 INFO:teuthology.orchestra.run.vm08.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T13:52:56.023 INFO:teuthology.orchestra.run.vm08.stdout: sg3-utils-udev
2026-03-10T13:52:56.023 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:52:56.038 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED:
2026-03-10T13:52:56.038 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-10T13:52:56.039 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-k8sevents*
2026-03-10T13:52:56.191 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:52:56.191 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-10T13:52:56.192 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-10T13:52:56.192 INFO:teuthology.orchestra.run.vm07.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot
2026-03-10T13:52:56.192 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:52:56.192 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:52:56.192 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:52:56.192 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:52:56.192 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-psutil python3-pyinotify
2026-03-10T13:52:56.192 INFO:teuthology.orchestra.run.vm07.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-10T13:52:56.192 INFO:teuthology.orchestra.run.vm07.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T13:52:56.192 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T13:52:56.192 INFO:teuthology.orchestra.run.vm07.stdout: python3-threadpoolctl python3-waitress python3-webob python3-websocket
2026-03-10T13:52:56.192 INFO:teuthology.orchestra.run.vm07.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T13:52:56.193 INFO:teuthology.orchestra.run.vm07.stdout: sg3-utils-udev
2026-03-10T13:52:56.193 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:52:56.209 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-10T13:52:56.209 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-10T13:52:56.211 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-k8sevents*
2026-03-10T13:52:56.238 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 4 to remove and 12 not upgraded.
2026-03-10T13:52:56.239 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 165 MB disk space will be freed.
2026-03-10T13:52:56.246 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:52:56.246 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-10T13:52:56.247 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-10T13:52:56.247 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot
2026-03-10T13:52:56.247 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:52:56.248 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:52:56.248 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:52:56.248 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:52:56.248 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-psutil python3-pyinotify
2026-03-10T13:52:56.248 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-10T13:52:56.248 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-10T13:52:56.248 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-10T13:52:56.248 INFO:teuthology.orchestra.run.vm00.stdout: python3-threadpoolctl python3-waitress python3-webob python3-websocket
2026-03-10T13:52:56.248 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T13:52:56.248 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev
2026-03-10T13:52:56.248 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:52:56.262 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T13:52:56.263 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-10T13:52:56.264 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-k8sevents*
2026-03-10T13:52:56.281 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-10T13:52:56.283 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:56.295 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:56.327 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:56.370 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:56.396 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 4 to remove and 12 not upgraded.
2026-03-10T13:52:56.396 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 165 MB disk space will be freed.
2026-03-10T13:52:56.433 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-10T13:52:56.435 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:56.446 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 4 to remove and 12 not upgraded.
2026-03-10T13:52:56.446 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 165 MB disk space will be freed.
2026-03-10T13:52:56.446 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:56.472 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:56.487 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-10T13:52:56.489 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:56.503 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:56.512 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:56.531 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:56.573 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:56.904 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-10T13:52:56.906 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:57.034 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-10T13:52:57.035 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:57.071 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-10T13:52:57.073 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:58.488 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:52:58.496 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:52:58.523 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists...
2026-03-10T13:52:58.532 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T13:52:58.583 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:52:58.617 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T13:52:58.688 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T13:52:58.688 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T13:52:58.713 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree...
2026-03-10T13:52:58.714 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information...
2026-03-10T13:52:58.811 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T13:52:58.811 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T13:52:58.862 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:52:58.863 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:52:58.863 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T13:52:58.864 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T13:52:58.864 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T13:52:58.864 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T13:52:58.864 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T13:52:58.864 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T13:52:58.864 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T13:52:58.864 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T13:52:58.864 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T13:52:58.864 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T13:52:58.864 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T13:52:58.864 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T13:52:58.864 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T13:52:58.864 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T13:52:58.864 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T13:52:58.864 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:52:58.880 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T13:52:58.881 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-10T13:52:58.944 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:52:58.945 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:52:58.945 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T13:52:58.945 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T13:52:58.945 INFO:teuthology.orchestra.run.vm08.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T13:52:58.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T13:52:58.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T13:52:58.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T13:52:58.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T13:52:58.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T13:52:58.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T13:52:58.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T13:52:58.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T13:52:58.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T13:52:58.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T13:52:58.945 INFO:teuthology.orchestra.run.vm08.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T13:52:58.945 INFO:teuthology.orchestra.run.vm08.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T13:52:58.945 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:52:58.956 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED:
2026-03-10T13:52:58.956 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-10T13:52:58.996 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:52:58.997 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:52:58.997 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T13:52:58.997 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T13:52:58.997 INFO:teuthology.orchestra.run.vm07.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T13:52:58.997 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T13:52:58.997 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T13:52:58.997 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T13:52:58.997 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T13:52:58.997 INFO:teuthology.orchestra.run.vm07.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T13:52:58.997 INFO:teuthology.orchestra.run.vm07.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T13:52:58.997 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T13:52:58.997 INFO:teuthology.orchestra.run.vm07.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T13:52:58.997 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T13:52:58.997 INFO:teuthology.orchestra.run.vm07.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T13:52:58.997 INFO:teuthology.orchestra.run.vm07.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T13:52:58.998 INFO:teuthology.orchestra.run.vm07.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T13:52:58.998 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:52:59.010 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-10T13:52:59.011 INFO:teuthology.orchestra.run.vm07.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-10T13:52:59.066 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 7 to remove and 12 not upgraded.
2026-03-10T13:52:59.066 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 472 MB disk space will be freed.
2026-03-10T13:52:59.099 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-10T13:52:59.100 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:59.134 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 7 to remove and 12 not upgraded.
2026-03-10T13:52:59.134 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 472 MB disk space will be freed.
2026-03-10T13:52:59.159 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:59.179 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-10T13:52:59.181 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:59.184 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 7 to remove and 12 not upgraded.
2026-03-10T13:52:59.184 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 472 MB disk space will be freed.
2026-03-10T13:52:59.225 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-10T13:52:59.227 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:59.239 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:59.287 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:59.566 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:59.689 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:59.712 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:52:59.982 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:00.080 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:00.142 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:00.407 INFO:teuthology.orchestra.run.vm00.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:00.526 INFO:teuthology.orchestra.run.vm07.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:00.563 INFO:teuthology.orchestra.run.vm08.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:00.816 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:00.854 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:00.951 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:00.976 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:00.989 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:01.014 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:01.293 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T13:53:01.390 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T13:53:01.447 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T13:53:01.453 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T13:53:01.466 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117455 files and directories currently installed.)
2026-03-10T13:53:01.468 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:01.483 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T13:53:01.487 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T13:53:01.562 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117455 files and directories currently installed.)
2026-03-10T13:53:01.564 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:01.567 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 117455 files and directories currently installed.)
2026-03-10T13:53:01.569 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:02.015 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:02.148 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:02.192 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:02.404 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:02.581 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:02.600 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:02.815 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:02.995 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:03.023 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:03.234 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:03.417 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:03.428 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:04.644 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T13:53:04.679 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T13:53:04.783 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T13:53:04.818 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-10T13:53:04.884 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T13:53:04.884 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 
2026-03-10T13:53:04.923 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T13:53:04.958 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-10T13:53:05.009 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-10T13:53:05.010 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-10T13:53:05.090 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:53:05.090 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T13:53:05.090 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T13:53:05.091 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T13:53:05.091 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T13:53:05.091 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T13:53:05.091 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T13:53:05.091 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T13:53:05.091 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T13:53:05.091 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T13:53:05.091 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T13:53:05.091 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib 
python3-routes python3-rsa python3-simplegeneric 2026-03-10T13:53:05.091 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T13:53:05.091 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T13:53:05.091 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T13:53:05.091 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T13:53:05.091 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T13:53:05.092 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T13:53:05.106 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-10T13:53:05.108 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse* 2026-03-10T13:53:05.170 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-10T13:53:05.171 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 
2026-03-10T13:53:05.186 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:53:05.187 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T13:53:05.187 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T13:53:05.187 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T13:53:05.187 INFO:teuthology.orchestra.run.vm08.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T13:53:05.188 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T13:53:05.188 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T13:53:05.188 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T13:53:05.188 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T13:53:05.188 INFO:teuthology.orchestra.run.vm08.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T13:53:05.188 INFO:teuthology.orchestra.run.vm08.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T13:53:05.188 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T13:53:05.188 INFO:teuthology.orchestra.run.vm08.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T13:53:05.188 INFO:teuthology.orchestra.run.vm08.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T13:53:05.188 INFO:teuthology.orchestra.run.vm08.stdout: python3-waitress python3-wcwidth python3-webob 
python3-websocket
2026-03-10T13:53:05.188 INFO:teuthology.orchestra.run.vm08.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T13:53:05.188 INFO:teuthology.orchestra.run.vm08.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T13:53:05.188 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:05.203 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED:
2026-03-10T13:53:05.204 INFO:teuthology.orchestra.run.vm08.stdout: ceph-fuse*
2026-03-10T13:53:05.292 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded.
2026-03-10T13:53:05.292 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-10T13:53:05.331 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117443 files and directories currently installed.)
2026-03-10T13:53:05.333 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:05.364 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:53:05.365 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T13:53:05.365 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T13:53:05.365 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T13:53:05.365 INFO:teuthology.orchestra.run.vm07.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T13:53:05.365 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T13:53:05.366 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T13:53:05.366 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T13:53:05.366 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T13:53:05.366 INFO:teuthology.orchestra.run.vm07.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T13:53:05.366 INFO:teuthology.orchestra.run.vm07.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T13:53:05.366 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T13:53:05.366 INFO:teuthology.orchestra.run.vm07.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T13:53:05.366 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T13:53:05.366 INFO:teuthology.orchestra.run.vm07.stdout: python3-waitress python3-wcwidth python3-webob 
python3-websocket
2026-03-10T13:53:05.366 INFO:teuthology.orchestra.run.vm07.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T13:53:05.366 INFO:teuthology.orchestra.run.vm07.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T13:53:05.366 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:05.379 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-10T13:53:05.380 INFO:teuthology.orchestra.run.vm07.stdout: ceph-fuse*
2026-03-10T13:53:05.386 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded.
2026-03-10T13:53:05.386 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-10T13:53:05.423 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 117443 files and directories currently installed.)
2026-03-10T13:53:05.424 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:05.561 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded.
2026-03-10T13:53:05.561 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-10T13:53:05.603 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117443 files and directories currently installed.)
2026-03-10T13:53:05.605 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:05.745 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T13:53:05.812 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T13:53:05.845 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-10T13:53:05.847 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:05.902 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-10T13:53:05.905 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:05.995 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T13:53:06.096 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-10T13:53:06.098 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:07.223 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:07.262 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T13:53:07.433 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:07.468 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists...
2026-03-10T13:53:07.483 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T13:53:07.483 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout:Package 'ceph-test' is not installed, so not removed 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout: 
python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T13:53:07.664 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T13:53:07.676 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T13:53:07.678 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T13:53:07.678 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T13:53:07.696 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-10T13:53:07.697 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-10T13:53:07.712 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T13:53:07.721 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-10T13:53:07.836 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T13:53:07.837 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 
2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout:Package 'ceph-volume' is not installed, so not removed 2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora 
python3-threadpoolctl 2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T13:53:07.930 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T13:53:07.938 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-10T13:53:07.939 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 2026-03-10T13:53:07.944 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T13:53:07.944 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T13:53:07.949 INFO:teuthology.orchestra.run.vm08.stdout:Package 'ceph-test' is not installed, so not removed 2026-03-10T13:53:07.949 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:53:07.949 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T13:53:07.950 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T13:53:07.950 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T13:53:07.950 INFO:teuthology.orchestra.run.vm08.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T13:53:07.950 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T13:53:07.950 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.classes 
python3-jaraco.collections python3-jaraco.functools 2026-03-10T13:53:07.950 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T13:53:07.950 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T13:53:07.950 INFO:teuthology.orchestra.run.vm08.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T13:53:07.950 INFO:teuthology.orchestra.run.vm08.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T13:53:07.950 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T13:53:07.950 INFO:teuthology.orchestra.run.vm08.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T13:53:07.950 INFO:teuthology.orchestra.run.vm08.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T13:53:07.950 INFO:teuthology.orchestra.run.vm08.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T13:53:07.950 INFO:teuthology.orchestra.run.vm08.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T13:53:07.950 INFO:teuthology.orchestra.run.vm08.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T13:53:07.950 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T13:53:07.965 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T13:53:07.965 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T13:53:07.979 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T13:53:07.999 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 
2026-03-10T13:53:08.097 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T13:53:08.098 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T13:53:08.122 INFO:teuthology.orchestra.run.vm07.stdout:Package 'ceph-test' is not installed, so not removed 2026-03-10T13:53:08.122 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:53:08.122 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T13:53:08.123 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T13:53:08.123 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T13:53:08.123 INFO:teuthology.orchestra.run.vm07.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T13:53:08.123 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T13:53:08.123 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T13:53:08.123 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T13:53:08.123 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T13:53:08.123 INFO:teuthology.orchestra.run.vm07.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T13:53:08.123 INFO:teuthology.orchestra.run.vm07.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T13:53:08.123 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T13:53:08.123 INFO:teuthology.orchestra.run.vm07.stdout: 
python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T13:53:08.123 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-10T13:53:08.123 INFO:teuthology.orchestra.run.vm07.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T13:53:08.124 INFO:teuthology.orchestra.run.vm07.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T13:53:08.124 INFO:teuthology.orchestra.run.vm07.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T13:53:08.124 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T13:53:08.151 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T13:53:08.151 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T13:53:08.185 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-10T13:53:08.194 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-10T13:53:08.195 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 
2026-03-10T13:53:08.292 INFO:teuthology.orchestra.run.vm00.stdout:Package 'radosgw' is not installed, so not removed 2026-03-10T13:53:08.292 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:53:08.292 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T13:53:08.292 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-10T13:53:08.293 INFO:teuthology.orchestra.run.vm00.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-10T13:53:08.293 INFO:teuthology.orchestra.run.vm00.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-10T13:53:08.293 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-10T13:53:08.294 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-10T13:53:08.294 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-10T13:53:08.294 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-10T13:53:08.294 INFO:teuthology.orchestra.run.vm00.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-10T13:53:08.294 INFO:teuthology.orchestra.run.vm00.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-10T13:53:08.294 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-10T13:53:08.294 INFO:teuthology.orchestra.run.vm00.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-10T13:53:08.294 INFO:teuthology.orchestra.run.vm00.stdout: python3-sklearn-lib python3-tempita python3-tempora 
python3-threadpoolctl 2026-03-10T13:53:08.294 INFO:teuthology.orchestra.run.vm00.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-10T13:53:08.294 INFO:teuthology.orchestra.run.vm00.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-10T13:53:08.294 INFO:teuthology.orchestra.run.vm00.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-10T13:53:08.294 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T13:53:08.324 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T13:53:08.324 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T13:53:08.360 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T13:53:08.413 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-10T13:53:08.414 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 
2026-03-10T13:53:08.461 INFO:teuthology.orchestra.run.vm08.stdout:Package 'ceph-volume' is not installed, so not removed
2026-03-10T13:53:08.461 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:08.461 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:08.462 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T13:53:08.463 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T13:53:08.463 INFO:teuthology.orchestra.run.vm08.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T13:53:08.463 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T13:53:08.463 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T13:53:08.463 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T13:53:08.463 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T13:53:08.463 INFO:teuthology.orchestra.run.vm08.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T13:53:08.463 INFO:teuthology.orchestra.run.vm08.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T13:53:08.463 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T13:53:08.463 INFO:teuthology.orchestra.run.vm08.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T13:53:08.463 INFO:teuthology.orchestra.run.vm08.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T13:53:08.463 INFO:teuthology.orchestra.run.vm08.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T13:53:08.463 INFO:teuthology.orchestra.run.vm08.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T13:53:08.463 INFO:teuthology.orchestra.run.vm08.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T13:53:08.463 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:08.496 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T13:53:08.496 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:08.530 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists...
2026-03-10T13:53:08.603 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T13:53:08.604 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T13:53:08.616 INFO:teuthology.orchestra.run.vm07.stdout:Package 'ceph-volume' is not installed, so not removed
2026-03-10T13:53:08.616 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:08.616 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:08.617 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T13:53:08.617 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T13:53:08.617 INFO:teuthology.orchestra.run.vm07.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T13:53:08.617 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T13:53:08.617 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T13:53:08.617 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T13:53:08.617 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T13:53:08.617 INFO:teuthology.orchestra.run.vm07.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T13:53:08.618 INFO:teuthology.orchestra.run.vm07.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T13:53:08.618 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T13:53:08.618 INFO:teuthology.orchestra.run.vm07.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T13:53:08.618 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T13:53:08.618 INFO:teuthology.orchestra.run.vm07.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T13:53:08.618 INFO:teuthology.orchestra.run.vm07.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T13:53:08.618 INFO:teuthology.orchestra.run.vm07.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T13:53:08.618 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:08.648 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T13:53:08.648 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:08.682 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T13:53:08.722 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree...
2026-03-10T13:53:08.723 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information...
2026-03-10T13:53:08.851 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:08.852 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:08.852 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T13:53:08.852 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T13:53:08.852 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T13:53:08.852 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:08.852 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:08.852 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:08.852 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:08.852 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:08.852 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:08.852 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:08.852 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:08.852 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:08.852 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:08.852 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:08.852 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:08.853 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T13:53:08.853 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip
2026-03-10T13:53:08.853 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:08.869 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T13:53:08.869 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs* python3-rados* python3-rgw*
2026-03-10T13:53:08.899 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T13:53:08.900 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T13:53:08.929 INFO:teuthology.orchestra.run.vm08.stdout:Package 'radosgw' is not installed, so not removed
2026-03-10T13:53:08.929 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:08.929 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:08.929 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T13:53:08.930 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T13:53:08.930 INFO:teuthology.orchestra.run.vm08.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T13:53:08.930 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T13:53:08.930 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T13:53:08.930 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T13:53:08.930 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T13:53:08.930 INFO:teuthology.orchestra.run.vm08.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T13:53:08.930 INFO:teuthology.orchestra.run.vm08.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T13:53:08.930 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T13:53:08.930 INFO:teuthology.orchestra.run.vm08.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T13:53:08.930 INFO:teuthology.orchestra.run.vm08.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T13:53:08.930 INFO:teuthology.orchestra.run.vm08.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T13:53:08.930 INFO:teuthology.orchestra.run.vm08.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T13:53:08.930 INFO:teuthology.orchestra.run.vm08.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T13:53:08.930 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:08.953 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T13:53:08.953 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:08.987 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists...
2026-03-10T13:53:09.057 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 3 to remove and 12 not upgraded.
2026-03-10T13:53:09.058 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 2062 kB disk space will be freed.
2026-03-10T13:53:09.089 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.)
2026-03-10T13:53:09.090 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:09.101 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:09.112 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:09.144 INFO:teuthology.orchestra.run.vm07.stdout:Package 'radosgw' is not installed, so not removed
2026-03-10T13:53:09.144 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:09.145 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:09.145 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-10T13:53:09.146 INFO:teuthology.orchestra.run.vm07.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-10T13:53:09.146 INFO:teuthology.orchestra.run.vm07.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-10T13:53:09.146 INFO:teuthology.orchestra.run.vm07.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-10T13:53:09.146 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-10T13:53:09.146 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-10T13:53:09.146 INFO:teuthology.orchestra.run.vm07.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-10T13:53:09.146 INFO:teuthology.orchestra.run.vm07.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-10T13:53:09.146 INFO:teuthology.orchestra.run.vm07.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-10T13:53:09.146 INFO:teuthology.orchestra.run.vm07.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-10T13:53:09.146 INFO:teuthology.orchestra.run.vm07.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-10T13:53:09.146 INFO:teuthology.orchestra.run.vm07.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-10T13:53:09.146 INFO:teuthology.orchestra.run.vm07.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-10T13:53:09.146 INFO:teuthology.orchestra.run.vm07.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-10T13:53:09.146 INFO:teuthology.orchestra.run.vm07.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-10T13:53:09.146 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:09.173 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T13:53:09.173 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:09.210 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T13:53:09.220 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree...
2026-03-10T13:53:09.220 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information...
2026-03-10T13:53:09.428 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T13:53:09.429 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T13:53:09.466 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:09.466 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:09.466 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T13:53:09.466 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T13:53:09.467 INFO:teuthology.orchestra.run.vm08.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T13:53:09.467 INFO:teuthology.orchestra.run.vm08.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:09.467 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:09.467 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:09.467 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:09.467 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:09.467 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:09.467 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:09.467 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:09.467 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:09.468 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:09.468 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:09.468 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:09.468 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T13:53:09.468 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet zip
2026-03-10T13:53:09.468 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:09.487 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED:
2026-03-10T13:53:09.487 INFO:teuthology.orchestra.run.vm08.stdout: python3-cephfs* python3-rados* python3-rgw*
2026-03-10T13:53:09.646 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:09.646 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:09.646 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T13:53:09.646 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T13:53:09.647 INFO:teuthology.orchestra.run.vm07.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T13:53:09.647 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:09.647 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:09.647 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:09.647 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:09.647 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:09.647 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:09.647 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:09.647 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:09.647 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:09.647 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:09.648 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:09.648 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:09.648 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T13:53:09.648 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet zip
2026-03-10T13:53:09.648 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:09.664 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-10T13:53:09.665 INFO:teuthology.orchestra.run.vm07.stdout: python3-cephfs* python3-rados* python3-rgw*
2026-03-10T13:53:09.683 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 3 to remove and 12 not upgraded.
2026-03-10T13:53:09.683 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 2062 kB disk space will be freed.
2026-03-10T13:53:09.726 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.)
2026-03-10T13:53:09.728 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:09.739 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:09.750 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:09.865 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 3 to remove and 12 not upgraded.
2026-03-10T13:53:09.865 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 2062 kB disk space will be freed.
2026-03-10T13:53:09.912 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.)
2026-03-10T13:53:09.915 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:09.927 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:09.943 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:10.408 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:10.445 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T13:53:10.680 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T13:53:10.680 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T13:53:10.888 INFO:teuthology.orchestra.run.vm00.stdout:Package 'python3-rgw' is not installed, so not removed
2026-03-10T13:53:10.888 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:10.888 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:10.888 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T13:53:10.888 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T13:53:10.889 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T13:53:10.889 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:10.889 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:10.889 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:10.889 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:10.889 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:10.889 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:10.889 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:10.889 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:10.889 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:10.889 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:10.889 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:10.889 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:10.889 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T13:53:10.889 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip
2026-03-10T13:53:10.889 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:10.912 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T13:53:10.912 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:10.945 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T13:53:10.977 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:10.990 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:11.014 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T13:53:11.026 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists...
2026-03-10T13:53:11.154 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T13:53:11.155 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T13:53:11.201 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T13:53:11.201 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T13:53:11.231 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree...
2026-03-10T13:53:11.232 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information...
2026-03-10T13:53:11.332 INFO:teuthology.orchestra.run.vm08.stdout:Package 'python3-rgw' is not installed, so not removed
2026-03-10T13:53:11.332 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:11.332 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:11.332 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T13:53:11.332 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T13:53:11.333 INFO:teuthology.orchestra.run.vm08.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T13:53:11.333 INFO:teuthology.orchestra.run.vm08.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:11.333 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:11.333 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:11.333 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:11.333 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:11.333 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:11.333 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:11.333 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:11.333 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:11.333 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:11.333 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:11.333 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:11.333 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T13:53:11.333 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet zip
2026-03-10T13:53:11.333 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:11.347 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T13:53:11.347 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:11.379 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists...
2026-03-10T13:53:11.406 INFO:teuthology.orchestra.run.vm00.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-10T13:53:11.406 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:53:11.406 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T13:53:11.406 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T13:53:11.406 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T13:53:11.406 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T13:53:11.406 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T13:53:11.406 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T13:53:11.407 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T13:53:11.407 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T13:53:11.407 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T13:53:11.407 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T13:53:11.407 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T13:53:11.407 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T13:53:11.407 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa 
python3-simplegeneric python3-simplejson 2026-03-10T13:53:11.407 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T13:53:11.407 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T13:53:11.407 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T13:53:11.407 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T13:53:11.407 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip 2026-03-10T13:53:11.407 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T13:53:11.432 INFO:teuthology.orchestra.run.vm07.stdout:Package 'python3-rgw' is not installed, so not removed 2026-03-10T13:53:11.432 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:53:11.432 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T13:53:11.433 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T13:53:11.433 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T13:53:11.433 INFO:teuthology.orchestra.run.vm07.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T13:53:11.434 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T13:53:11.434 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T13:53:11.434 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 
2026-03-10T13:53:11.434 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T13:53:11.434 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T13:53:11.434 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T13:53:11.434 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T13:53:11.434 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T13:53:11.434 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T13:53:11.434 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T13:53:11.434 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T13:53:11.437 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T13:53:11.437 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T13:53:11.437 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T13:53:11.437 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T13:53:11.437 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet zip 2026-03-10T13:53:11.437 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T13:53:11.456 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 
2026-03-10T13:53:11.456 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T13:53:11.471 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T13:53:11.491 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-10T13:53:11.538 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-10T13:53:11.539 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-10T13:53:11.643 INFO:teuthology.orchestra.run.vm08.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-10T13:53:11.643 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:53:11.643 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T13:53:11.643 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T13:53:11.643 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T13:53:11.644 INFO:teuthology.orchestra.run.vm08.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T13:53:11.644 INFO:teuthology.orchestra.run.vm08.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T13:53:11.644 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T13:53:11.644 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T13:53:11.644 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T13:53:11.644 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils 
python3-mako 2026-03-10T13:53:11.644 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T13:53:11.644 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T13:53:11.644 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T13:53:11.644 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T13:53:11.644 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T13:53:11.644 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T13:53:11.644 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T13:53:11.644 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T13:53:11.644 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet zip 2026-03-10T13:53:11.644 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T13:53:11.653 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T13:53:11.654 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T13:53:11.657 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T13:53:11.657 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T13:53:11.680 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-10T13:53:11.681 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 
2026-03-10T13:53:11.691 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 
2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet zip 2026-03-10T13:53:11.823 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T13:53:11.824 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:53:11.825 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T13:53:11.825 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T13:53:11.825 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T13:53:11.825 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T13:53:11.825 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T13:53:11.825 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T13:53:11.825 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T13:53:11.825 
INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T13:53:11.825 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T13:53:11.825 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T13:53:11.825 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T13:53:11.825 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T13:53:11.825 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T13:53:11.825 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T13:53:11.825 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T13:53:11.825 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T13:53:11.825 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T13:53:11.825 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip 2026-03-10T13:53:11.825 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T13:53:11.835 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-10T13:53:11.835 INFO:teuthology.orchestra.run.vm00.stdout: python3-rbd* 2026-03-10T13:53:11.837 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded. 2026-03-10T13:53:11.838 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-10T13:53:11.858 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-10T13:53:11.859 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-10T13:53:11.876 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-10T13:53:12.019 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded. 2026-03-10T13:53:12.020 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 1186 kB disk space will be freed. 2026-03-10T13:53:12.039 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:53:12.039 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T13:53:12.040 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T13:53:12.040 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T13:53:12.041 INFO:teuthology.orchestra.run.vm08.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T13:53:12.041 INFO:teuthology.orchestra.run.vm08.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T13:53:12.041 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T13:53:12.041 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T13:53:12.041 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T13:53:12.041 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T13:53:12.041 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste 
python3-pastedeploy python3-pastescript 2026-03-10T13:53:12.041 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T13:53:12.041 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T13:53:12.041 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T13:53:12.041 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T13:53:12.041 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T13:53:12.041 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T13:53:12.041 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T13:53:12.041 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet zip 2026-03-10T13:53:12.041 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T13:53:12.062 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED: 2026-03-10T13:53:12.062 INFO:teuthology.orchestra.run.vm08.stdout: python3-rbd* 2026-03-10T13:53:12.075 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 
117410 files and directories currently installed.) 2026-03-10T13:53:12.077 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:12.079 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-10T13:53:12.080 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 2026-03-10T13:53:12.249 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:53:12.249 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T13:53:12.249 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T13:53:12.249 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T13:53:12.249 INFO:teuthology.orchestra.run.vm07.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T13:53:12.250 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T13:53:12.250 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T13:53:12.250 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T13:53:12.250 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T13:53:12.250 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T13:53:12.250 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T13:53:12.250 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 
2026-03-10T13:53:12.250 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T13:53:12.250 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T13:53:12.250 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T13:53:12.250 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T13:53:12.250 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T13:53:12.250 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T13:53:12.250 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet zip 2026-03-10T13:53:12.250 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T13:53:12.257 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED: 2026-03-10T13:53:12.257 INFO:teuthology.orchestra.run.vm07.stdout: python3-rbd* 2026-03-10T13:53:12.332 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded. 2026-03-10T13:53:12.332 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 1186 kB disk space will be freed. 2026-03-10T13:53:12.376 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 
90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117410 files and directories currently installed.) 2026-03-10T13:53:12.379 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:12.418 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 1 to remove and 12 not upgraded. 2026-03-10T13:53:12.419 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 1186 kB disk space will be freed. 2026-03-10T13:53:12.451 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117410 files and directories currently installed.) 2026-03-10T13:53:12.453 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:13.133 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T13:53:13.173 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists... 2026-03-10T13:53:13.394 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T13:53:13.394 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T13:53:13.574 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-10T13:53:13.587 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required: 2026-03-10T13:53:13.587 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T13:53:13.587 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-10T13:53:13.587 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-10T13:53:13.588 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-10T13:53:13.588 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T13:53:13.588 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T13:53:13.588 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T13:53:13.588 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T13:53:13.588 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T13:53:13.588 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T13:53:13.588 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T13:53:13.588 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T13:53:13.588 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T13:53:13.588 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch 
python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T13:53:13.588 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T13:53:13.588 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T13:53:13.588 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-10T13:53:13.588 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip 2026-03-10T13:53:13.588 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-10T13:53:13.599 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-10T13:53:13.600 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-dev* libcephfs2* 2026-03-10T13:53:13.607 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-10T13:53:13.663 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-10T13:53:13.701 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists... 2026-03-10T13:53:13.788 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 2 to remove and 12 not upgraded. 2026-03-10T13:53:13.788 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 3202 kB disk space will be freed. 2026-03-10T13:53:13.817 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-10T13:53:13.818 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-10T13:53:13.830 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree... 2026-03-10T13:53:13.831 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information... 2026-03-10T13:53:13.834 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 
(Reading database ... 117402 files and directories currently installed.)
2026-03-10T13:53:13.836 INFO:teuthology.orchestra.run.vm00.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:13.848 INFO:teuthology.orchestra.run.vm00.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:13.872 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T13:53:14.018 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:14.018 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:14.019 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T13:53:14.019 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T13:53:14.019 INFO:teuthology.orchestra.run.vm08.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T13:53:14.019 INFO:teuthology.orchestra.run.vm08.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:14.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:14.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:14.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:14.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:14.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:14.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:14.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:14.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:14.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:14.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:14.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:14.020 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T13:53:14.020 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet zip
2026-03-10T13:53:14.020 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:14.029 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:14.030 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:14.030 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T13:53:14.030 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T13:53:14.031 INFO:teuthology.orchestra.run.vm07.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T13:53:14.031 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:14.031 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:14.031 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:14.031 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:14.031 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:14.031 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:14.031 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:14.031 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:14.031 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:14.031 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:14.031 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:14.031 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:14.031 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T13:53:14.031 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet zip
2026-03-10T13:53:14.031 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:14.039 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED:
2026-03-10T13:53:14.040 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-dev* libcephfs2*
2026-03-10T13:53:14.048 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-10T13:53:14.049 INFO:teuthology.orchestra.run.vm07.stdout: libcephfs-dev* libcephfs2*
2026-03-10T13:53:14.244 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 2 to remove and 12 not upgraded.
2026-03-10T13:53:14.244 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 3202 kB disk space will be freed.
2026-03-10T13:53:14.248 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 2 to remove and 12 not upgraded.
2026-03-10T13:53:14.248 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 3202 kB disk space will be freed.
2026-03-10T13:53:14.282 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 117402 files and directories currently installed.)
2026-03-10T13:53:14.282 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117402 files and directories currently installed.)
2026-03-10T13:53:14.284 INFO:teuthology.orchestra.run.vm08.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:14.284 INFO:teuthology.orchestra.run.vm07.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:14.295 INFO:teuthology.orchestra.run.vm08.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:14.297 INFO:teuthology.orchestra.run.vm07.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:14.322 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T13:53:14.330 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T13:53:15.140 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:15.175 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T13:53:15.410 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T13:53:15.410 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T13:53:15.526 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:15.561 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T13:53:15.590 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:15.624 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists...
2026-03-10T13:53:15.635 INFO:teuthology.orchestra.run.vm00.stdout:Package 'libcephfs-dev' is not installed, so not removed
2026-03-10T13:53:15.635 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:15.635 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:15.635 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T13:53:15.635 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T13:53:15.636 INFO:teuthology.orchestra.run.vm00.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T13:53:15.636 INFO:teuthology.orchestra.run.vm00.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:15.636 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:15.636 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:15.636 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:15.636 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:15.636 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:15.636 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:15.636 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:15.637 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:15.637 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:15.637 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:15.637 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:15.637 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T13:53:15.637 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet zip
2026-03-10T13:53:15.637 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:15.658 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T13:53:15.658 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T13:53:15.663 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T13:53:15.663 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:15.695 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout:Package 'libcephfs-dev' is not installed, so not removed
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout: xmlstarlet zip
2026-03-10T13:53:15.749 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:15.762 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T13:53:15.763 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:15.802 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T13:53:15.842 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree...
2026-03-10T13:53:15.843 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information...
2026-03-10T13:53:15.900 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T13:53:15.901 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T13:53:15.909 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T13:53:15.910 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-10T13:53:15.994 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:16.001 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-10T13:53:16.001 INFO:teuthology.orchestra.run.vm07.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph*
2026-03-10T13:53:16.001 INFO:teuthology.orchestra.run.vm07.stdout: qemu-block-extra* rbd-fuse*
2026-03-10T13:53:16.108 INFO:teuthology.orchestra.run.vm08.stdout:Package 'libcephfs-dev' is not installed, so not removed
2026-03-10T13:53:16.108 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:16.108 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:16.108 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-10T13:53:16.108 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-10T13:53:16.109 INFO:teuthology.orchestra.run.vm08.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-10T13:53:16.109 INFO:teuthology.orchestra.run.vm08.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:16.109 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:16.109 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:16.109 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:16.109 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:16.109 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:16.109 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:16.109 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:16.110 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:16.110 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:16.110 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:16.110 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:16.110 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-10T13:53:16.110 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet zip
2026-03-10T13:53:16.110 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:16.139 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T13:53:16.139 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:16.173 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists...
2026-03-10T13:53:16.180 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 7 to remove and 12 not upgraded.
2026-03-10T13:53:16.180 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 51.6 MB disk space will be freed.
2026-03-10T13:53:16.185 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:16.185 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:16.186 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-10T13:53:16.186 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T13:53:16.186 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-10T13:53:16.186 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-10T13:53:16.187 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-10T13:53:16.187 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:16.187 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:16.187 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:16.187 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:16.187 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:16.187 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:16.187 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:16.187 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:16.187 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:16.187 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:16.187 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:16.187 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:16.187 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-10T13:53:16.187 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-10T13:53:16.187 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:16.206 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED:
2026-03-10T13:53:16.207 INFO:teuthology.orchestra.run.vm00.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph*
2026-03-10T13:53:16.207 INFO:teuthology.orchestra.run.vm00.stdout: qemu-block-extra* rbd-fuse*
2026-03-10T13:53:16.224 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117387 files and directories currently installed.)
2026-03-10T13:53:16.227 INFO:teuthology.orchestra.run.vm07.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:16.242 INFO:teuthology.orchestra.run.vm07.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:16.255 INFO:teuthology.orchestra.run.vm07.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:16.267 INFO:teuthology.orchestra.run.vm07.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-10T13:53:16.388 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree...
2026-03-10T13:53:16.389 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information...
2026-03-10T13:53:16.402 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 7 to remove and 12 not upgraded.
2026-03-10T13:53:16.402 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 51.6 MB disk space will be freed.
2026-03-10T13:53:16.441 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... 117387 files and directories currently installed.)
2026-03-10T13:53:16.442 INFO:teuthology.orchestra.run.vm00.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:16.454 INFO:teuthology.orchestra.run.vm00.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:16.465 INFO:teuthology.orchestra.run.vm00.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:16.475 INFO:teuthology.orchestra.run.vm00.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-10T13:53:16.558 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:16.558 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:16.558 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-10T13:53:16.558 INFO:teuthology.orchestra.run.vm08.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T13:53:16.558 INFO:teuthology.orchestra.run.vm08.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-10T13:53:16.558 INFO:teuthology.orchestra.run.vm08.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-10T13:53:16.559 INFO:teuthology.orchestra.run.vm08.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-10T13:53:16.559 INFO:teuthology.orchestra.run.vm08.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:16.559 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:16.559 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:16.559 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:16.559 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:16.559 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:16.559 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:16.559 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:16.559 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:16.559 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:16.559 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:16.559 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:16.559 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-10T13:53:16.559 INFO:teuthology.orchestra.run.vm08.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-10T13:53:16.559 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:16.575 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED:
2026-03-10T13:53:16.576 INFO:teuthology.orchestra.run.vm08.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph*
2026-03-10T13:53:16.576 INFO:teuthology.orchestra.run.vm08.stdout: qemu-block-extra* rbd-fuse*
2026-03-10T13:53:16.684 INFO:teuthology.orchestra.run.vm07.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:16.696 INFO:teuthology.orchestra.run.vm07.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:16.710 INFO:teuthology.orchestra.run.vm07.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:16.736 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T13:53:16.741 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 7 to remove and 12 not upgraded.
2026-03-10T13:53:16.741 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 51.6 MB disk space will be freed.
2026-03-10T13:53:16.773 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T13:53:16.792 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 117387 files and directories currently installed.)
2026-03-10T13:53:16.795 INFO:teuthology.orchestra.run.vm08.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:16.809 INFO:teuthology.orchestra.run.vm08.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:16.823 INFO:teuthology.orchestra.run.vm08.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:16.834 INFO:teuthology.orchestra.run.vm08.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-10T13:53:16.844 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... 117336 files and directories currently installed.)
2026-03-10T13:53:16.846 INFO:teuthology.orchestra.run.vm07.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-10T13:53:16.871 INFO:teuthology.orchestra.run.vm00.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:16.884 INFO:teuthology.orchestra.run.vm00.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:16.902 INFO:teuthology.orchestra.run.vm00.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:16.931 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T13:53:16.967 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T13:53:17.043 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.)
2026-03-10T13:53:17.045 INFO:teuthology.orchestra.run.vm00.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-10T13:53:17.232 INFO:teuthology.orchestra.run.vm08.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:17.246 INFO:teuthology.orchestra.run.vm08.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:17.261 INFO:teuthology.orchestra.run.vm08.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:17.291 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T13:53:17.329 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T13:53:17.730 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.)
2026-03-10T13:53:17.733 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-10T13:53:18.352 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:18.388 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T13:53:18.533 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T13:53:18.534 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T13:53:18.773 INFO:teuthology.orchestra.run.vm07.stdout:Package 'librbd1' is not installed, so not removed
2026-03-10T13:53:18.773 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:18.773 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:18.773 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-10T13:53:18.773 INFO:teuthology.orchestra.run.vm07.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T13:53:18.773 INFO:teuthology.orchestra.run.vm07.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-10T13:53:18.773 INFO:teuthology.orchestra.run.vm07.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-10T13:53:18.773 INFO:teuthology.orchestra.run.vm07.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-10T13:53:18.773 INFO:teuthology.orchestra.run.vm07.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:18.773 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:18.773 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:18.774 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:18.774 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:18.774 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:18.774 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:18.774 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:18.774 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:18.774 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:18.774 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:18.774 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:18.774 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-10T13:53:18.774 INFO:teuthology.orchestra.run.vm07.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-10T13:53:18.774 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:18.799 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T13:53:18.799 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:18.832 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T13:53:18.935 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:18.968 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T13:53:18.968 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T13:53:18.968 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T13:53:19.015 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:19.049 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists...
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout:Package 'rbd-fuse' is not installed, so not removed
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-10T13:53:19.152 INFO:teuthology.orchestra.run.vm07.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:19.166 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T13:53:19.166 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:19.168 DEBUG:teuthology.orchestra.run.vm07:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq
2026-03-10T13:53:19.186 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T13:53:19.186 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T13:53:19.224 DEBUG:teuthology.orchestra.run.vm07:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove
2026-03-10T13:53:19.243 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree...
2026-03-10T13:53:19.244 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information...
2026-03-10T13:53:19.303 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T13:53:19.371 INFO:teuthology.orchestra.run.vm00.stdout:Package 'librbd1' is not installed, so not removed
2026-03-10T13:53:19.371 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:19.371 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:19.371 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-10T13:53:19.371 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T13:53:19.371 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-10T13:53:19.372 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-10T13:53:19.372 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-10T13:53:19.372 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:19.372 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:19.372 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:19.372 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:19.372 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:19.372 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:19.372 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:19.372 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:19.372 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:19.372 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:19.372 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:19.372 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:19.373 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-10T13:53:19.373 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-10T13:53:19.373 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:19.399 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T13:53:19.400 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:19.414 INFO:teuthology.orchestra.run.vm08.stdout:Package 'librbd1' is not installed, so not removed
2026-03-10T13:53:19.414 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:19.414 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:19.414 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-10T13:53:19.414 INFO:teuthology.orchestra.run.vm08.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T13:53:19.414 INFO:teuthology.orchestra.run.vm08.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-10T13:53:19.415 INFO:teuthology.orchestra.run.vm08.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-10T13:53:19.415 INFO:teuthology.orchestra.run.vm08.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-10T13:53:19.415 INFO:teuthology.orchestra.run.vm08.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:19.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:19.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:19.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:19.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:19.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:19.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:19.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:19.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:19.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:19.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:19.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:19.415 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-10T13:53:19.415 INFO:teuthology.orchestra.run.vm08.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-10T13:53:19.415 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:19.431 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T13:53:19.432 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:19.433 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T13:53:19.467 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists...
2026-03-10T13:53:19.488 INFO:teuthology.orchestra.run.vm07.stdout:Building dependency tree...
2026-03-10T13:53:19.488 INFO:teuthology.orchestra.run.vm07.stdout:Reading state information...
2026-03-10T13:53:19.644 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree...
2026-03-10T13:53:19.644 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information...
2026-03-10T13:53:19.695 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree...
2026-03-10T13:53:19.696 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information...
2026-03-10T13:53:19.700 INFO:teuthology.orchestra.run.vm07.stdout:The following packages will be REMOVED:
2026-03-10T13:53:19.701 INFO:teuthology.orchestra.run.vm07.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:19.701 INFO:teuthology.orchestra.run.vm07.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-10T13:53:19.701 INFO:teuthology.orchestra.run.vm07.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T13:53:19.701 INFO:teuthology.orchestra.run.vm07.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-10T13:53:19.702 INFO:teuthology.orchestra.run.vm07.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-10T13:53:19.702 INFO:teuthology.orchestra.run.vm07.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-10T13:53:19.702 INFO:teuthology.orchestra.run.vm07.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:19.702 INFO:teuthology.orchestra.run.vm07.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:19.702 INFO:teuthology.orchestra.run.vm07.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:19.702 INFO:teuthology.orchestra.run.vm07.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:19.702 INFO:teuthology.orchestra.run.vm07.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:19.702 INFO:teuthology.orchestra.run.vm07.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:19.702 INFO:teuthology.orchestra.run.vm07.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:19.702 INFO:teuthology.orchestra.run.vm07.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:19.702 INFO:teuthology.orchestra.run.vm07.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:19.702 INFO:teuthology.orchestra.run.vm07.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:19.702 INFO:teuthology.orchestra.run.vm07.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:19.702 INFO:teuthology.orchestra.run.vm07.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:19.702 INFO:teuthology.orchestra.run.vm07.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-10T13:53:19.703 INFO:teuthology.orchestra.run.vm07.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-10T13:53:19.861 INFO:teuthology.orchestra.run.vm08.stdout:Package 'rbd-fuse' is not installed, so not removed
2026-03-10T13:53:19.861 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:19.861 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:19.861 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-10T13:53:19.861 INFO:teuthology.orchestra.run.vm08.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T13:53:19.861 INFO:teuthology.orchestra.run.vm08.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-10T13:53:19.861 INFO:teuthology.orchestra.run.vm08.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-10T13:53:19.862 INFO:teuthology.orchestra.run.vm08.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-10T13:53:19.862 INFO:teuthology.orchestra.run.vm08.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:19.862 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:19.862 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:19.862 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:19.862 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:19.862 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:19.862 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:19.862 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:19.862 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:19.862 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:19.862 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:19.862 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:19.862 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-10T13:53:19.862 INFO:teuthology.orchestra.run.vm08.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-10T13:53:19.862 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:19.884 INFO:teuthology.orchestra.run.vm00.stdout:Package 'rbd-fuse' is not installed, so not removed
2026-03-10T13:53:19.884 INFO:teuthology.orchestra.run.vm00.stdout:The following packages were automatically installed and are no longer required:
2026-03-10T13:53:19.884 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-10T13:53:19.884 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-10T13:53:19.884 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-10T13:53:19.884 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-10T13:53:19.885 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-10T13:53:19.885 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-10T13:53:19.885 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-10T13:53:19.885 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-10T13:53:19.885 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-10T13:53:19.885 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-10T13:53:19.885 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-10T13:53:19.885 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-10T13:53:19.885 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-10T13:53:19.885 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-10T13:53:19.885 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-10T13:53:19.885 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-10T13:53:19.885 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-10T13:53:19.885 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-10T13:53:19.886 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-10T13:53:19.886 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-10T13:53:19.886 INFO:teuthology.orchestra.run.vm00.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-10T13:53:19.889 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T13:53:19.889 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:19.891 DEBUG:teuthology.orchestra.run.vm08:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq
2026-03-10T13:53:19.907 INFO:teuthology.orchestra.run.vm07.stdout:0 upgraded, 0 newly installed, 87 to remove and 12 not upgraded.
2026-03-10T13:53:19.907 INFO:teuthology.orchestra.run.vm07.stdout:After this operation, 107 MB disk space will be freed.
2026-03-10T13:53:19.913 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
2026-03-10T13:53:19.913 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:19.915 DEBUG:teuthology.orchestra.run.vm00:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq
2026-03-10T13:53:19.945 DEBUG:teuthology.orchestra.run.vm08:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove
2026-03-10T13:53:19.951 INFO:teuthology.orchestra.run.vm07.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.)
2026-03-10T13:53:19.954 INFO:teuthology.orchestra.run.vm07.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-10T13:53:19.970 INFO:teuthology.orchestra.run.vm07.stdout:Removing jq (1.6-2.1ubuntu3.1) ...
2026-03-10T13:53:19.974 DEBUG:teuthology.orchestra.run.vm00:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove
2026-03-10T13:53:19.981 INFO:teuthology.orchestra.run.vm07.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ...
2026-03-10T13:53:19.992 INFO:teuthology.orchestra.run.vm07.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ...
2026-03-10T13:53:20.005 INFO:teuthology.orchestra.run.vm07.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ...
2026-03-10T13:53:20.018 INFO:teuthology.orchestra.run.vm07.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-10T13:53:20.024 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists...
2026-03-10T13:53:20.030 INFO:teuthology.orchestra.run.vm07.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T13:53:20.042 INFO:teuthology.orchestra.run.vm07.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T13:53:20.051 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T13:53:20.053 INFO:teuthology.orchestra.run.vm07.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-10T13:53:20.074 INFO:teuthology.orchestra.run.vm07.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-10T13:53:20.089 INFO:teuthology.orchestra.run.vm07.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-10T13:53:20.100 INFO:teuthology.orchestra.run.vm07.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T13:53:20.112 INFO:teuthology.orchestra.run.vm07.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T13:53:20.123 INFO:teuthology.orchestra.run.vm07.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T13:53:20.134 INFO:teuthology.orchestra.run.vm07.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ...
2026-03-10T13:53:20.144 INFO:teuthology.orchestra.run.vm07.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ...
2026-03-10T13:53:20.154 INFO:teuthology.orchestra.run.vm07.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-10T13:53:20.164 INFO:teuthology.orchestra.run.vm07.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-10T13:53:20.175 INFO:teuthology.orchestra.run.vm07.stdout:Removing luarocks (3.8.0+dfsg1-1) ...
2026-03-10T13:53:20.182 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-10T13:53:20.182 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-10T13:53:20.200 INFO:teuthology.orchestra.run.vm07.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T13:53:20.211 INFO:teuthology.orchestra.run.vm07.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-10T13:53:20.222 INFO:teuthology.orchestra.run.vm07.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T13:53:20.224 INFO:teuthology.orchestra.run.vm00.stdout:Building dependency tree... 2026-03-10T13:53:20.225 INFO:teuthology.orchestra.run.vm00.stdout:Reading state information... 2026-03-10T13:53:20.233 INFO:teuthology.orchestra.run.vm07.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T13:53:20.243 INFO:teuthology.orchestra.run.vm07.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-10T13:53:20.254 INFO:teuthology.orchestra.run.vm07.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-10T13:53:20.264 INFO:teuthology.orchestra.run.vm07.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T13:53:20.278 INFO:teuthology.orchestra.run.vm07.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T13:53:20.294 INFO:teuthology.orchestra.run.vm07.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 2026-03-10T13:53:20.302 INFO:teuthology.orchestra.run.vm07.stdout:update-initramfs: deferring update (trigger activated) 2026-03-10T13:53:20.313 INFO:teuthology.orchestra.run.vm07.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-10T13:53:20.346 INFO:teuthology.orchestra.run.vm07.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 
2026-03-10T13:53:20.357 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED: 2026-03-10T13:53:20.357 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T13:53:20.357 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T13:53:20.357 INFO:teuthology.orchestra.run.vm08.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T13:53:20.357 INFO:teuthology.orchestra.run.vm08.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T13:53:20.357 INFO:teuthology.orchestra.run.vm08.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T13:53:20.357 INFO:teuthology.orchestra.run.vm08.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T13:53:20.358 INFO:teuthology.orchestra.run.vm08.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T13:53:20.358 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T13:53:20.358 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T13:53:20.358 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T13:53:20.358 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T13:53:20.358 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T13:53:20.358 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T13:53:20.358 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 
2026-03-10T13:53:20.358 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T13:53:20.358 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T13:53:20.358 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T13:53:20.358 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T13:53:20.358 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T13:53:20.358 INFO:teuthology.orchestra.run.vm08.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T13:53:20.374 INFO:teuthology.orchestra.run.vm07.stdout:Removing lua-any (27ubuntu1) ... 2026-03-10T13:53:20.383 INFO:teuthology.orchestra.run.vm00.stdout:The following packages will be REMOVED: 2026-03-10T13:53:20.384 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-10T13:53:20.384 INFO:teuthology.orchestra.run.vm00.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-10T13:53:20.384 INFO:teuthology.orchestra.run.vm00.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-10T13:53:20.384 INFO:teuthology.orchestra.run.vm00.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-10T13:53:20.384 INFO:teuthology.orchestra.run.vm00.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-10T13:53:20.384 INFO:teuthology.orchestra.run.vm00.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-10T13:53:20.384 INFO:teuthology.orchestra.run.vm00.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-10T13:53:20.384 
INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-10T13:53:20.384 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-10T13:53:20.384 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-10T13:53:20.385 INFO:teuthology.orchestra.run.vm00.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-10T13:53:20.385 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-10T13:53:20.385 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-10T13:53:20.385 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-10T13:53:20.385 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-10T13:53:20.385 INFO:teuthology.orchestra.run.vm00.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-10T13:53:20.385 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-10T13:53:20.385 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-10T13:53:20.385 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-10T13:53:20.385 INFO:teuthology.orchestra.run.vm00.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-10T13:53:20.387 INFO:teuthology.orchestra.run.vm07.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-10T13:53:20.400 INFO:teuthology.orchestra.run.vm07.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 
2026-03-10T13:53:20.414 INFO:teuthology.orchestra.run.vm07.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-10T13:53:20.433 INFO:teuthology.orchestra.run.vm07.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T13:53:20.530 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 87 to remove and 12 not upgraded. 2026-03-10T13:53:20.530 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 107 MB disk space will be freed. 2026-03-10T13:53:20.553 INFO:teuthology.orchestra.run.vm00.stdout:0 upgraded, 0 newly installed, 87 to remove and 12 not upgraded. 2026-03-10T13:53:20.553 INFO:teuthology.orchestra.run.vm00.stdout:After this operation, 107 MB disk space will be freed. 2026-03-10T13:53:20.570 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-10T13:53:20.572 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:20.589 INFO:teuthology.orchestra.run.vm08.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-10T13:53:20.593 INFO:teuthology.orchestra.run.vm00.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 
55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-10T13:53:20.595 INFO:teuthology.orchestra.run.vm00.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:20.601 INFO:teuthology.orchestra.run.vm08.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-10T13:53:20.614 INFO:teuthology.orchestra.run.vm08.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-10T13:53:20.614 INFO:teuthology.orchestra.run.vm00.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-10T13:53:20.627 INFO:teuthology.orchestra.run.vm08.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-10T13:53:20.627 INFO:teuthology.orchestra.run.vm00.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-10T13:53:20.640 INFO:teuthology.orchestra.run.vm08.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-10T13:53:20.640 INFO:teuthology.orchestra.run.vm00.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-10T13:53:20.652 INFO:teuthology.orchestra.run.vm08.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T13:53:20.653 INFO:teuthology.orchestra.run.vm00.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-10T13:53:20.665 INFO:teuthology.orchestra.run.vm08.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T13:53:20.666 INFO:teuthology.orchestra.run.vm00.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-10T13:53:20.677 INFO:teuthology.orchestra.run.vm08.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T13:53:20.678 INFO:teuthology.orchestra.run.vm00.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 
2026-03-10T13:53:20.692 INFO:teuthology.orchestra.run.vm00.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T13:53:20.699 INFO:teuthology.orchestra.run.vm08.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-10T13:53:20.704 INFO:teuthology.orchestra.run.vm00.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-10T13:53:20.714 INFO:teuthology.orchestra.run.vm08.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T13:53:20.726 INFO:teuthology.orchestra.run.vm00.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-10T13:53:20.727 INFO:teuthology.orchestra.run.vm08.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T13:53:20.740 INFO:teuthology.orchestra.run.vm00.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-10T13:53:20.740 INFO:teuthology.orchestra.run.vm08.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T13:53:20.752 INFO:teuthology.orchestra.run.vm08.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T13:53:20.752 INFO:teuthology.orchestra.run.vm00.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T13:53:20.764 INFO:teuthology.orchestra.run.vm00.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T13:53:20.765 INFO:teuthology.orchestra.run.vm08.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T13:53:20.777 INFO:teuthology.orchestra.run.vm08.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-10T13:53:20.778 INFO:teuthology.orchestra.run.vm00.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T13:53:20.789 INFO:teuthology.orchestra.run.vm00.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-10T13:53:20.791 INFO:teuthology.orchestra.run.vm08.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T13:53:20.801 INFO:teuthology.orchestra.run.vm00.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 
2026-03-10T13:53:20.802 INFO:teuthology.orchestra.run.vm08.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-10T13:53:20.813 INFO:teuthology.orchestra.run.vm00.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-10T13:53:20.815 INFO:teuthology.orchestra.run.vm08.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-10T13:53:20.824 INFO:teuthology.orchestra.run.vm00.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-10T13:53:20.837 INFO:teuthology.orchestra.run.vm00.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-10T13:53:20.839 INFO:teuthology.orchestra.run.vm07.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-10T13:53:20.840 INFO:teuthology.orchestra.run.vm08.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T13:53:20.851 INFO:teuthology.orchestra.run.vm08.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-10T13:53:20.863 INFO:teuthology.orchestra.run.vm08.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T13:53:20.864 INFO:teuthology.orchestra.run.vm00.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-10T13:53:20.871 INFO:teuthology.orchestra.run.vm07.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T13:53:20.874 INFO:teuthology.orchestra.run.vm08.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-10T13:53:20.876 INFO:teuthology.orchestra.run.vm00.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-10T13:53:20.885 INFO:teuthology.orchestra.run.vm08.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-10T13:53:20.889 INFO:teuthology.orchestra.run.vm00.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-10T13:53:20.897 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T13:53:20.899 INFO:teuthology.orchestra.run.vm08.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-10T13:53:20.903 INFO:teuthology.orchestra.run.vm00.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 
2026-03-10T13:53:20.910 INFO:teuthology.orchestra.run.vm08.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T13:53:20.914 INFO:teuthology.orchestra.run.vm00.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-10T13:53:20.921 INFO:teuthology.orchestra.run.vm08.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T13:53:20.925 INFO:teuthology.orchestra.run.vm00.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-10T13:53:20.932 INFO:teuthology.orchestra.run.vm08.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 2026-03-10T13:53:20.938 INFO:teuthology.orchestra.run.vm00.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-10T13:53:20.940 INFO:teuthology.orchestra.run.vm08.stdout:update-initramfs: deferring update (trigger activated) 2026-03-10T13:53:20.950 INFO:teuthology.orchestra.run.vm08.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-10T13:53:20.950 INFO:teuthology.orchestra.run.vm00.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-10T13:53:20.957 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-10T13:53:20.962 INFO:teuthology.orchestra.run.vm00.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 2026-03-10T13:53:20.968 INFO:teuthology.orchestra.run.vm08.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 2026-03-10T13:53:20.971 INFO:teuthology.orchestra.run.vm00.stdout:update-initramfs: deferring update (trigger activated) 2026-03-10T13:53:20.980 INFO:teuthology.orchestra.run.vm08.stdout:Removing lua-any (27ubuntu1) ... 2026-03-10T13:53:20.982 INFO:teuthology.orchestra.run.vm00.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-10T13:53:20.991 INFO:teuthology.orchestra.run.vm08.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-10T13:53:21.002 INFO:teuthology.orchestra.run.vm00.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 
2026-03-10T13:53:21.003 INFO:teuthology.orchestra.run.vm08.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-10T13:53:21.007 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-10T13:53:21.015 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua-any (27ubuntu1) ... 2026-03-10T13:53:21.017 INFO:teuthology.orchestra.run.vm08.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-10T13:53:21.027 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-10T13:53:21.037 INFO:teuthology.orchestra.run.vm08.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T13:53:21.039 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-10T13:53:21.054 INFO:teuthology.orchestra.run.vm00.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-10T13:53:21.062 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-10T13:53:21.071 INFO:teuthology.orchestra.run.vm00.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-10T13:53:21.120 INFO:teuthology.orchestra.run.vm07.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T13:53:21.132 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-10T13:53:21.203 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T13:53:21.449 INFO:teuthology.orchestra.run.vm08.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-10T13:53:21.478 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-10T13:53:21.479 INFO:teuthology.orchestra.run.vm00.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-10T13:53:21.483 INFO:teuthology.orchestra.run.vm08.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T13:53:21.509 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 
2026-03-10T13:53:21.517 INFO:teuthology.orchestra.run.vm00.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-10T13:53:21.532 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-10T13:53:21.544 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-10T13:53:21.570 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-10T13:53:21.584 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:21.609 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-10T13:53:21.622 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-10T13:53:21.633 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:21.659 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-10T13:53:21.676 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-10T13:53:21.689 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-10T13:53:21.711 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-10T13:53:21.726 INFO:teuthology.orchestra.run.vm08.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T13:53:21.738 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-10T13:53:21.749 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T13:53:21.754 INFO:teuthology.orchestra.run.vm00.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-10T13:53:21.764 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 
2026-03-10T13:53:21.793 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T13:53:21.801 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-10T13:53:21.818 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-10T13:53:21.848 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-10T13:53:21.897 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-10T13:53:21.944 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-10T13:53:21.992 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-10T13:53:22.039 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-10T13:53:22.054 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-10T13:53:22.086 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-10T13:53:22.088 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T13:53:22.112 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-10T13:53:22.144 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-10T13:53:22.161 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:22.192 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:22.208 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:22.211 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 
2026-03-10T13:53:22.244 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-10T13:53:22.260 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-10T13:53:22.276 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-10T13:53:22.296 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-10T13:53:22.322 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T13:53:22.327 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T13:53:22.358 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-10T13:53:22.375 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-10T13:53:22.378 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-natsort (8.0.2-1) ... 2026-03-10T13:53:22.412 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-10T13:53:22.425 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-10T13:53:22.431 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T13:53:22.463 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-10T13:53:22.478 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-10T13:53:22.492 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-10T13:53:22.520 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-10T13:53:22.530 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-tempora (4.1.2-1) ... 
2026-03-10T13:53:22.542 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-10T13:53:22.572 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-10T13:53:22.582 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-10T13:53:22.597 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-10T13:53:22.625 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-10T13:53:22.631 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-10T13:53:22.649 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T13:53:22.675 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-10T13:53:22.682 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T13:53:22.708 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-10T13:53:22.723 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-10T13:53:22.758 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-10T13:53:22.805 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T13:53:22.810 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-rsa (4.8-1) ... 2026-03-10T13:53:22.851 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 2026-03-10T13:53:22.861 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-simplegeneric (0.8.1-3) ... 2026-03-10T13:53:22.866 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-10T13:53:22.910 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-simplejson (3.17.6-1build1) ... 
2026-03-10T13:53:22.912 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-10T13:53:22.915 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T13:53:22.959 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-10T13:53:22.964 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-10T13:53:22.966 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-natsort (8.0.2-1) ... 2026-03-10T13:53:23.011 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-natsort (8.0.2-1) ... 2026-03-10T13:53:23.013 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T13:53:23.019 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T13:53:23.039 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T13:53:23.061 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 2026-03-10T13:53:23.078 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-10T13:53:23.087 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-threadpoolctl (3.1.0-1) ... 2026-03-10T13:53:23.120 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-10T13:53:23.126 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-10T13:53:23.139 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T13:53:23.174 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-10T13:53:23.202 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-10T13:53:23.208 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ... 
2026-03-10T13:53:23.247 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-10T13:53:23.255 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T13:53:23.259 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T13:53:23.297 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 2026-03-10T13:53:23.305 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-websocket (1.2.3-1) ... 2026-03-10T13:53:23.311 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-10T13:53:23.349 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-10T13:53:23.354 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-10T13:53:23.363 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-10T13:53:23.400 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-10T13:53:23.414 INFO:teuthology.orchestra.run.vm07.stdout:Removing python3-zc.lockfile (2.0-1) ... 2026-03-10T13:53:23.418 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-rsa (4.8-1) ... 2026-03-10T13:53:23.452 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-rsa (4.8-1) ... 2026-03-10T13:53:23.469 INFO:teuthology.orchestra.run.vm07.stdout:Removing qttranslations5-l10n (5.15.3-1) ... 2026-03-10T13:53:23.475 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-simplegeneric (0.8.1-3) ... 2026-03-10T13:53:23.492 INFO:teuthology.orchestra.run.vm07.stdout:Removing smartmontools (7.2-1ubuntu0.1) ... 2026-03-10T13:53:23.503 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-simplegeneric (0.8.1-3) ... 2026-03-10T13:53:23.528 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-simplejson (3.17.6-1build1) ... 
2026-03-10T13:53:23.552 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-simplejson (3.17.6-1build1) ... 2026-03-10T13:53:23.586 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-10T13:53:23.608 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-10T13:53:23.638 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T13:53:23.660 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-10T13:53:23.664 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T13:53:23.687 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ... 2026-03-10T13:53:23.714 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-threadpoolctl (3.1.0-1) ... 2026-03-10T13:53:23.737 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-threadpoolctl (3.1.0-1) ... 2026-03-10T13:53:23.757 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T13:53:23.783 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-10T13:53:23.807 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T13:53:23.830 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-10T13:53:23.851 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T13:53:23.880 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-10T13:53:23.903 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-websocket (1.2.3-1) ... 2026-03-10T13:53:23.928 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-websocket (1.2.3-1) ... 
2026-03-10T13:53:23.946 INFO:teuthology.orchestra.run.vm07.stdout:Removing socat (1.7.4.1-3ubuntu4) ...
2026-03-10T13:53:23.959 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-10T13:53:23.959 INFO:teuthology.orchestra.run.vm07.stdout:Removing unzip (6.0-26ubuntu3.2) ...
2026-03-10T13:53:23.980 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-10T13:53:23.981 INFO:teuthology.orchestra.run.vm07.stdout:Removing xmlstarlet (1.6.1-2.1) ...
2026-03-10T13:53:24.001 INFO:teuthology.orchestra.run.vm07.stdout:Removing zip (3.0-12build2) ...
2026-03-10T13:53:24.018 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-zc.lockfile (2.0-1) ...
2026-03-10T13:53:24.028 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T13:53:24.033 INFO:teuthology.orchestra.run.vm00.stdout:Removing python3-zc.lockfile (2.0-1) ...
2026-03-10T13:53:24.038 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T13:53:24.067 INFO:teuthology.orchestra.run.vm08.stdout:Removing qttranslations5-l10n (5.15.3-1) ...
2026-03-10T13:53:24.082 INFO:teuthology.orchestra.run.vm00.stdout:Removing qttranslations5-l10n (5.15.3-1) ...
2026-03-10T13:53:24.086 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-10T13:53:24.090 INFO:teuthology.orchestra.run.vm08.stdout:Removing smartmontools (7.2-1ubuntu0.1) ...
2026-03-10T13:53:24.094 INFO:teuthology.orchestra.run.vm07.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ...
2026-03-10T13:53:24.106 INFO:teuthology.orchestra.run.vm00.stdout:Removing smartmontools (7.2-1ubuntu0.1) ...
2026-03-10T13:53:24.116 INFO:teuthology.orchestra.run.vm07.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm
2026-03-10T13:53:24.514 INFO:teuthology.orchestra.run.vm00.stdout:Removing socat (1.7.4.1-3ubuntu4) ...
2026-03-10T13:53:24.526 INFO:teuthology.orchestra.run.vm00.stdout:Removing unzip (6.0-26ubuntu3.2) ...
2026-03-10T13:53:24.547 INFO:teuthology.orchestra.run.vm00.stdout:Removing xmlstarlet (1.6.1-2.1) ...
2026-03-10T13:53:24.547 INFO:teuthology.orchestra.run.vm08.stdout:Removing socat (1.7.4.1-3ubuntu4) ...
2026-03-10T13:53:24.562 INFO:teuthology.orchestra.run.vm08.stdout:Removing unzip (6.0-26ubuntu3.2) ...
2026-03-10T13:53:24.565 INFO:teuthology.orchestra.run.vm00.stdout:Removing zip (3.0-12build2) ...
2026-03-10T13:53:24.582 INFO:teuthology.orchestra.run.vm08.stdout:Removing xmlstarlet (1.6.1-2.1) ...
2026-03-10T13:53:24.595 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T13:53:24.601 INFO:teuthology.orchestra.run.vm08.stdout:Removing zip (3.0-12build2) ...
2026-03-10T13:53:24.606 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T13:53:24.629 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-10T13:53:24.642 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-10T13:53:24.652 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-10T13:53:24.660 INFO:teuthology.orchestra.run.vm00.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ...
2026-03-10T13:53:24.675 INFO:teuthology.orchestra.run.vm00.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm
2026-03-10T13:53:24.691 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-10T13:53:24.699 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ...
2026-03-10T13:53:24.715 INFO:teuthology.orchestra.run.vm08.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm
2026-03-10T13:53:25.654 INFO:teuthology.orchestra.run.vm07.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays.
2026-03-10T13:53:25.654 INFO:teuthology.orchestra.run.vm07.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file.
2026-03-10T13:53:26.234 INFO:teuthology.orchestra.run.vm00.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays.
2026-03-10T13:53:26.235 INFO:teuthology.orchestra.run.vm00.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file.
2026-03-10T13:53:26.246 INFO:teuthology.orchestra.run.vm08.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays.
2026-03-10T13:53:26.246 INFO:teuthology.orchestra.run.vm08.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file.
2026-03-10T13:53:27.695 INFO:teuthology.orchestra.run.vm07.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:27.698 DEBUG:teuthology.parallel:result is None
2026-03-10T13:53:28.278 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:28.281 DEBUG:teuthology.parallel:result is None
2026-03-10T13:53:28.665 INFO:teuthology.orchestra.run.vm00.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-10T13:53:28.668 DEBUG:teuthology.parallel:result is None
2026-03-10T13:53:28.668 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm00.local
2026-03-10T13:53:28.668 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm07.local
2026-03-10T13:53:28.668 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm08.local
2026-03-10T13:53:28.668 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/apt/sources.list.d/ceph.list
2026-03-10T13:53:28.668 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/apt/sources.list.d/ceph.list
2026-03-10T13:53:28.668 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/apt/sources.list.d/ceph.list
2026-03-10T13:53:28.676 DEBUG:teuthology.orchestra.run.vm07:> sudo apt-get update
2026-03-10T13:53:28.677 DEBUG:teuthology.orchestra.run.vm08:> sudo apt-get update
2026-03-10T13:53:28.720 DEBUG:teuthology.orchestra.run.vm00:> sudo apt-get update
2026-03-10T13:53:28.965 INFO:teuthology.orchestra.run.vm07.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-10T13:53:28.972 INFO:teuthology.orchestra.run.vm07.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-10T13:53:28.975 INFO:teuthology.orchestra.run.vm08.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-10T13:53:28.975 INFO:teuthology.orchestra.run.vm08.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-10T13:53:29.006 INFO:teuthology.orchestra.run.vm08.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-10T13:53:29.009 INFO:teuthology.orchestra.run.vm07.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-10T13:53:29.035 INFO:teuthology.orchestra.run.vm00.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-10T13:53:29.043 INFO:teuthology.orchestra.run.vm08.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-10T13:53:29.044 INFO:teuthology.orchestra.run.vm07.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-10T13:53:29.068 INFO:teuthology.orchestra.run.vm00.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-10T13:53:29.105 INFO:teuthology.orchestra.run.vm00.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-10T13:53:29.618 INFO:teuthology.orchestra.run.vm00.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-10T13:53:29.897 INFO:teuthology.orchestra.run.vm07.stdout:Reading package lists...
2026-03-10T13:53:29.911 DEBUG:teuthology.parallel:result is None
2026-03-10T13:53:29.975 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists...
2026-03-10T13:53:29.989 DEBUG:teuthology.parallel:result is None
2026-03-10T13:53:30.512 INFO:teuthology.orchestra.run.vm00.stdout:Reading package lists...
2026-03-10T13:53:30.524 DEBUG:teuthology.parallel:result is None
2026-03-10T13:53:30.524 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-10T13:53:30.526 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-10T13:53:30.526 DEBUG:teuthology.orchestra.run.vm00:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T13:53:30.527 DEBUG:teuthology.orchestra.run.vm07:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T13:53:30.528 DEBUG:teuthology.orchestra.run.vm08:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T13:53:30.585 INFO:teuthology.orchestra.run.vm08.stdout: remote refid st t when poll reach delay offset jitter
2026-03-10T13:53:30.585 INFO:teuthology.orchestra.run.vm08.stdout:==============================================================================
2026-03-10T13:53:30.586 INFO:teuthology.orchestra.run.vm08.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T13:53:30.586 INFO:teuthology.orchestra.run.vm08.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T13:53:30.586 INFO:teuthology.orchestra.run.vm08.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T13:53:30.586 INFO:teuthology.orchestra.run.vm08.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T13:53:30.586 INFO:teuthology.orchestra.run.vm08.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T13:53:30.586 INFO:teuthology.orchestra.run.vm08.stdout:+time.cloudflare 10.164.8.4 3 u 49 64 377 20.453 +1.212 0.689
2026-03-10T13:53:30.586 INFO:teuthology.orchestra.run.vm08.stdout:+node-3.infogral 168.239.11.197 2 u 51 64 377 23.510 +0.614 0.713
2026-03-10T13:53:30.586 INFO:teuthology.orchestra.run.vm08.stdout:*ntp1.aew1.soe.a .GPS. 1 u 48 64 377 25.361 +0.924 0.934
2026-03-10T13:53:30.586 INFO:teuthology.orchestra.run.vm08.stdout:-vps-fra2.orlean 169.254.169.254 4 u 47 64 377 20.952 -0.775 2.287
2026-03-10T13:53:30.586 INFO:teuthology.orchestra.run.vm08.stdout:+nur1.aup.dk 131.188.3.222 2 u 45 64 377 23.538 +0.938 0.949
2026-03-10T13:53:30.586 INFO:teuthology.orchestra.run.vm08.stdout:+static.buzo.eu 100.10.69.89 2 u 42 64 377 23.542 +1.103 1.085
2026-03-10T13:53:30.586 INFO:teuthology.orchestra.run.vm08.stdout:-mail.anyvm.tech 129.69.253.17 2 u 46 64 377 23.461 -0.471 1.095
2026-03-10T13:53:30.586 INFO:teuthology.orchestra.run.vm08.stdout:-ntp3.uni-ulm.de 129.69.253.1 2 u 50 64 377 27.208 -0.371 1.272
2026-03-10T13:53:30.586 INFO:teuthology.orchestra.run.vm08.stdout:-ntp2.uni-ulm.de 129.69.253.1 2 u 54 64 377 27.204 -0.949 0.680
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout: remote refid st t when poll reach delay offset jitter
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout:==============================================================================
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout:+time.cloudflare 10.216.8.4 3 u 49 64 377 20.426 +0.331 1.176
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout:+static.buzo.eu 100.10.69.89 2 u 51 64 377 23.560 +0.882 0.948
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout:-listserver.trex 131.188.3.223 2 u 50 64 377 25.080 -0.834 1.259
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout:+vps-fra2.orlean 169.254.169.254 4 u 51 64 377 20.932 +0.102 0.604
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout:*ntp1.aew1.soe.a .GPS. 1 u 46 64 377 25.316 +0.526 0.656
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout:+node-3.infogral 168.239.11.197 2 u 46 64 377 23.498 +0.596 0.654
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout:+nur1.aup.dk 131.188.3.222 2 u 42 64 377 23.521 +0.666 0.732
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout:-lb01.leardev.de 17.253.52.253 2 u 49 64 377 25.789 +0.436 0.727
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout:#ntp3.uni-ulm.de 129.69.253.1 2 u 41 64 377 27.351 -1.686 1.135
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout:-ntp2.uni-ulm.de 129.69.253.1 2 u 48 64 377 27.289 -0.719 0.597
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout:#desktopvm.r4yu. 80.153.195.191 3 u 45 64 377 34.122 -0.468 1.094
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout:-mail.anyvm.tech 129.69.253.17 2 u 46 64 377 23.498 -0.492 1.018
2026-03-10T13:53:30.774 INFO:teuthology.orchestra.run.vm00.stdout:#www.h4x-gamers. 237.17.204.95 2 u 49 64 377 25.056 -0.768 1.139
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout: remote refid st t when poll reach delay offset jitter
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout:==============================================================================
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout:+static.buzo.eu 100.10.69.89 2 u 50 64 377 23.497 -0.285 0.758
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout:-listserver.trex 131.188.3.223 2 u 50 64 377 25.089 -0.812 0.484
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout:-time.cloudflare 10.124.8.190 3 u 47 64 377 20.392 +0.390 1.089
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout:-vps-fra2.orlean 169.254.169.254 4 u 51 64 377 20.948 +1.149 1.392
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout:#82.165.178.31 82.64.45.50 2 u 44 64 377 27.213 +0.768 1.476
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout:-node-3.infogral 168.239.11.197 2 u 53 64 377 23.518 +1.299 2.269
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout:*ntp1.aew1.soe.a .GPS. 1 u 48 64 377 25.285 -0.238 0.841
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout:#desktopvm.r4yu. 80.153.195.191 3 u 47 64 377 33.841 +0.452 1.878
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout:-nur1.aup.dk 131.188.3.222 2 u 45 64 377 23.599 +1.160 1.822
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout:+ntp2.uni-ulm.de 129.69.253.1 2 u 44 64 377 27.420 +0.221 1.394
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout:-lb01.leardev.de 192.53.103.108 2 u 47 64 377 25.905 +0.314 0.727
2026-03-10T13:53:30.775 INFO:teuthology.orchestra.run.vm07.stdout:+ntp3.uni-ulm.de 129.69.253.1 2 u 46 64 377 27.357 -1.542 1.018
2026-03-10T13:53:30.775 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-10T13:53:30.777 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-10T13:53:30.777 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-10T13:53:30.779 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-10T13:53:30.781 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-10T13:53:30.783 INFO:teuthology.task.internal:Duration was 957.935307 seconds
2026-03-10T13:53:30.783 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-10T13:53:30.785 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-10T13:53:30.785 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T13:53:30.786 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T13:53:30.788 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T13:53:30.813 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-10T13:53:30.813 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm00.local
2026-03-10T13:53:30.814 DEBUG:teuthology.orchestra.run.vm00:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T13:53:30.867 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm07.local
2026-03-10T13:53:30.868 DEBUG:teuthology.orchestra.run.vm07:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T13:53:30.880 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm08.local
2026-03-10T13:53:30.880 DEBUG:teuthology.orchestra.run.vm08:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T13:53:30.891 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-10T13:53:30.892 DEBUG:teuthology.orchestra.run.vm00:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T13:53:30.912 DEBUG:teuthology.orchestra.run.vm07:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T13:53:30.923 DEBUG:teuthology.orchestra.run.vm08:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T13:53:30.967 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-10T13:53:30.967 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T13:53:30.968 DEBUG:teuthology.orchestra.run.vm07:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T13:53:30.974 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T13:53:30.975 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T13:53:30.975 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T13:53:30.975 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T13:53:30.975 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T13:53:30.984 INFO:teuthology.orchestra.run.vm00.stderr: 89.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T13:53:31.007 DEBUG:teuthology.orchestra.run.vm08:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T13:53:31.013 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T13:53:31.013 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T13:53:31.014 INFO:teuthology.orchestra.run.vm07.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T13:53:31.014 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-10T13:53:31.014 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T13:53:31.015 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T13:53:31.015 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T13:53:31.015 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: gzip 0.0% -5 -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz --verbose
2026-03-10T13:53:31.015 INFO:teuthology.orchestra.run.vm08.stderr: -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T13:53:31.016 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-10T13:53:31.021 INFO:teuthology.orchestra.run.vm07.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 89.5% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T13:53:31.023 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 89.3% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-10T13:53:31.024 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-10T13:53:31.027 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-10T13:53:31.027 DEBUG:teuthology.orchestra.run.vm00:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T13:53:31.034 DEBUG:teuthology.orchestra.run.vm07:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T13:53:31.075 DEBUG:teuthology.orchestra.run.vm08:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T13:53:31.082 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-10T13:53:31.084 DEBUG:teuthology.orchestra.run.vm00:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T13:53:31.085 DEBUG:teuthology.orchestra.run.vm07:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T13:53:31.092 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = core
2026-03-10T13:53:31.119 DEBUG:teuthology.orchestra.run.vm08:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T13:53:31.125 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern = core
2026-03-10T13:53:31.130 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern = core
2026-03-10T13:53:31.138 DEBUG:teuthology.orchestra.run.vm00:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T13:53:31.145 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T13:53:31.146 DEBUG:teuthology.orchestra.run.vm07:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T13:53:31.176 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T13:53:31.176 DEBUG:teuthology.orchestra.run.vm08:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T13:53:31.182 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T13:53:31.182 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-10T13:53:31.184 INFO:teuthology.task.internal:Transferring archived files...
2026-03-10T13:53:31.185 DEBUG:teuthology.misc:Transferring archived files from vm00:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1053/remote/vm00
2026-03-10T13:53:31.185 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T13:53:31.196 DEBUG:teuthology.misc:Transferring archived files from vm07:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1053/remote/vm07
2026-03-10T13:53:31.196 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T13:53:31.225 DEBUG:teuthology.misc:Transferring archived files from vm08:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1053/remote/vm08
2026-03-10T13:53:31.225 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T13:53:31.232 INFO:teuthology.task.internal:Removing archive directory...
2026-03-10T13:53:31.232 DEBUG:teuthology.orchestra.run.vm00:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T13:53:31.240 DEBUG:teuthology.orchestra.run.vm07:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T13:53:31.267 DEBUG:teuthology.orchestra.run.vm08:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T13:53:31.279 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-10T13:53:31.281 INFO:teuthology.task.internal:Not uploading archives.
2026-03-10T13:53:31.281 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-10T13:53:31.283 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-10T13:53:31.284 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T13:53:31.284 DEBUG:teuthology.orchestra.run.vm07:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T13:53:31.286 INFO:teuthology.orchestra.run.vm00.stdout: 258068 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 10 13:53 /home/ubuntu/cephtest
2026-03-10T13:53:31.311 DEBUG:teuthology.orchestra.run.vm08:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T13:53:31.313 INFO:teuthology.orchestra.run.vm07.stdout: 258078 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 10 13:53 /home/ubuntu/cephtest
2026-03-10T13:53:31.323 INFO:teuthology.orchestra.run.vm08.stdout: 258080 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 10 13:53 /home/ubuntu/cephtest
2026-03-10T13:53:31.324 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T13:53:31.329 INFO:teuthology.run:Summary data: description: orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_monitoring_stack_basic} duration: 957.9353065490723 flavor: default owner: kyr success: true
2026-03-10T13:53:31.329 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T13:53:31.347 INFO:teuthology.run:pass